Unit 4

Amazon RDS

Multi-AZ deployments

I recommend that they migrate their on-premises database to Amazon Relational Database Service
(Amazon RDS), and use a Multi-AZ deployment for high availability. In a Multi-AZ deployment,
Amazon RDS automatically creates a primary database (DB) instance and synchronously replicates
the data to an instance in a different Availability Zone. When it detects a failure, Amazon RDS
automatically fails over to a standby instance without manual intervention. This failover mechanism
meets the customer’s need to have a highly available database.

The following diagram shows a Multi-AZ deployment with one standby DB instance, and how it
works.

For even higher availability, the customer could explore deploying two standby DB instances, and
use three Availability Zones instead of two.

Say that you deploy MySQL or PostgreSQL databases in three Availability Zones by using Amazon
RDS Multi-AZ with two readable standbys. With this configuration, automatic failovers typically
complete in under 35 seconds, and transaction-commit latency can be up to two times faster than in
an Amazon RDS Multi-AZ deployment with one standby. You also gain additional read capacity and
a choice of AWS Graviton2–based or Intel–based instances for compute.

The following diagram shows a Multi-AZ deployment with two standby DB instances, and how it
works.
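
For illustration, here is a minimal sketch of how a Multi-AZ DB instance could be created programmatically with the AWS SDK for Python (boto3). The identifier, credentials, and instance class shown are placeholder assumptions, not values from this document.

    import boto3

    rds = boto3.client("rds")

    # Create a MySQL DB instance with a synchronous standby in another AZ.
    # All identifiers and sizes below are illustrative placeholders.
    rds.create_db_instance(
        DBInstanceIdentifier="mydb",           # hypothetical name
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,                  # GiB
        MasterUsername="admin",
        MasterUserPassword="ChangeMe12345",    # use AWS Secrets Manager in practice
        MultiAZ=True,                          # provisions the standby and enables automatic failover
    )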

Read replicas
Customers can also make RDS more highly available by using read replicas.
Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS DB
instances. For read-heavy database workloads, read replicas make it easier to elastically scale out
beyond the capacity constraints of a single DB instance.
You can create one or more replicas of a given source DB instance and serve high-volume
application read traffic from multiple copies of your data, which increases aggregate read
throughput. Read replicas can also be promoted to become standalone DB instances, when needed.
Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, Microsoft
SQL Server, and Amazon Aurora.
For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS
creates a second DB instance by using a snapshot of the source DB instance. Amazon RDS then
uses the engine’s native asynchronous replication to update the read replica when there’s a change
to the source DB instance.
The read replica operates as a DB instance that allows only read-only connections. Applications
can connect to a read replica like they would connect to any DB instance. Amazon RDS replicates
all databases in the source DB instance.

Here’s an example of when to use a read replica. Say that you’re running reports on your database,
which is causing performance issues with CPU-intensive reads. You can use a read replica and
direct all the reporting queries to that replica instead of to the primary instance. Offloading some of
the intense queries to the replica should result in enhanced performance on the primary instance.
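
As a rough sketch (assuming the hypothetical source instance "mydb" from the earlier example), a read replica for offloading reporting queries could be created with boto3 like this:

    import boto3

    rds = boto3.client("rds")

    # Create an asynchronous read replica of the source instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mydb-reporting-replica",   # hypothetical replica name
        SourceDBInstanceIdentifier="mydb",               # hypothetical source instance
    )

    # Reporting applications would then point their read-only connection
    # string at the replica's endpoint instead of the primary's.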

Scaling Amazon RDS instances

Scale your instance up vertically

When you create an RDS DB instance, you choose a database instance type and size. Amazon RDS
provides a selection of instance types that are optimized to fit different use cases for relational
databases. Instance types comprise varying combinations of CPU, memory, storage, and
networking capacity. You have the flexibility to choose the appropriate mix of resources for your
database. Each instance type includes several instance sizes, which means that you can scale your
database to your target workload’s requirements.
Not every instance type is supported for every database engine, version, edition, or Region.
When you want to scale your DB instance, you can vertically scale it by choosing a larger
instance size. This might be the route you choose when you need more CPU and storage
capacity for an instance.

Use read replicas

If you need more CPU capabilities but don’t need more storage, you might choose to create read
replicas to offload some of the workload to a secondary instance.

Enable RDS Storage Auto Scaling


If you need more storage, but don't need more CPU, then you can scale the storage instead.
You can do this by manually allocating more storage for your instance, or by enabling RDS
Storage Auto Scaling. RDS Storage Auto Scaling automatically scales storage capacity in
response to growing database workloads, with virtually zero downtime.
Previously, you needed to manually provision storage capacity based on anticipated application
demands. Underprovisioning could result in application downtime, and overprovisioning could
result in underutilized resources and higher costs. With RDS Storage Auto Scaling, you set your
desired maximum storage limit and Auto Scaling takes care of the rest.
RDS Storage Auto Scaling continuously monitors actual storage consumption, and scales capacity
up automatically when actual utilization approaches provisioned storage capacity.
Auto Scaling works with new and existing database instances. You can enable Auto Scaling with a
few clicks in the AWS Management Console.
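
A minimal sketch of enabling RDS Storage Auto Scaling with boto3 follows; setting MaxAllocatedStorage on an existing instance turns the feature on. The identifier and storage limit are placeholder assumptions.

    import boto3

    rds = boto3.client("rds")

    # Enable Storage Auto Scaling by defining a maximum storage ceiling (in GiB).
    rds.modify_db_instance(
        DBInstanceIdentifier="mydb",       # hypothetical instance
        MaxAllocatedStorage=1000,          # RDS grows storage automatically up to this limit
        ApplyImmediately=True,
    )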
Change the storage type for increased performance

Finally, here’s one last thing to consider. If you’re looking for better performance, consider using a
different storage type. For example, using Provisioned IOPS instead of General Purpose could give
you some of the performance enhancements that you want.
The following list briefly describes the three storage types:

 General Purpose SSD: General Purpose SSD volumes offer cost-effective storage that works
well for a broad range of workloads. These volumes deliver single-digit millisecond latencies
and the ability to burst to 3,000 IOPS for extended periods of time. Baseline performance for
these volumes is determined by the volume’s size.

 Provisioned IOPS: Provisioned IOPS storage is designed to meet the needs of I/O-intensive
workloads — particularly database workloads — that require low I/O latency and consistent
I/O throughput.
 Magnetic: Amazon RDS also supports magnetic storage for backward compatibility. We
recommend that you use General Purpose SSD or Provisioned IOPS for any new storage needs.
The maximum amount of storage that’s allowed for DB instances on magnetic storage is less
than that of the other storage types.

Amazon Aurora

Amazon Aurora is a relational database service offered in the AWS cloud. It is one of the
widely used services for data storage, and for low-latency, transactional data storage and
processing. Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database
built for the cloud that combines the performance and availability of high-end commercial databases
with the simplicity and cost-effectiveness of open-source databases, at roughly 1/10th the cost. It
uses a clustered approach, replicating data across AWS Availability Zones for efficient data
availability.

Amazon Aurora is packed with high-performance subsystems. Its MySQL- and PostgreSQL-compatible
engines take advantage of fast distributed storage. Aurora provides throughput of up to 5 times
that of standard MySQL and 3 times that of standard PostgreSQL. It supports high
storage capacity, which can scale up to 64 terabytes of database size for enterprise
implementations. Amazon Aurora is fully managed by Amazon Relational Database Service
(RDS), which automates time-consuming administration tasks like hardware provisioning, database
setup, patching, and backups.

Amazon Aurora is built on top of an innovative distributed architecture that separates the storage
and compute layers of the database engine. The storage layer is distributed across multiple replicas,
while the compute layer runs on instances that are separate from the storage layer. This architecture
allows for automatic scaling of storage and compute resources independently and also provides
better fault tolerance and availability.

Features of Amazon Aurora

1. Availability and Durability: AWS Aurora features fault-tolerant and self-healing
storage built for the cloud. It offers availability of 99.99%. The storage layer
replicates six copies of your data across three Availability Zones, and AWS
Aurora backs up the data continuously to protect against storage failure.

2. Performance and Scalability: AWS Aurora provides 5 times the throughput of standard
MySQL. This performance is comparable with enterprise databases, at 1/10th the cost. You
can scale database deployments up and down, from smaller to larger instance types, as
your needs change. To scale read capacity and performance, you can add up to fifteen
low-latency read replicas across three Availability Zones. Amazon Aurora automatically
grows storage as required, up to 64 TB per database instance.

3. Fully Managed: Amazon Aurora is managed by Amazon Relational Database Service
(RDS). You no longer need to worry about database management tasks such as
hardware provisioning, software patching, setup, configuration, or backups. Aurora
automatically and continuously monitors and backs up the database to Amazon S3,
enabling granular point-in-time recovery.

4. Security: Amazon Aurora provides multiple levels of security for your database. On an
encrypted Amazon Aurora instance, data in the underlying storage is encrypted, keys are
managed through AWS Key Management Service (KMS), and data in transit is encrypted
using SSL. In addition, automated backups, snapshots, and replicas in the same cluster
are also encrypted.

5. Migration Support: MySQL and PostgreSQL compatibility make Amazon Aurora a
compelling target for database migrations to the cloud. If you want to migrate from
MySQL or PostgreSQL, see the migration documentation for a list of tools and options.
To move from commercial database engines, you can use the AWS Database Migration
Service for a secure migration with minimal downtime.

6. Compatibility with MySQL and PostgreSQL: The Amazon Aurora database engine is
fully compatible with existing MySQL and PostgreSQL open-source databases, and
adds compatibility for new releases regularly. This means that you can migrate
MySQL or PostgreSQL databases to Aurora using standard MySQL or PostgreSQL
import/export tools or snapshots. It also means that the code, applications, drivers,
and tools you already use with existing databases can be used with Amazon Aurora
with little or no modification.

7. Cost: Amazon Aurora is designed to be cost-effective. You only pay for what you use, and
you can scale your database up or down as needed. Additionally, Aurora provides cost-
saving features such as automated storage optimization and the ability to pause or stop your
database when it's not in use.

How does Amazon Aurora Work

An Aurora database cluster comprises a primary database instance, Aurora replica instances, and a
cluster volume that manages the data for those database instances. The Aurora cluster volume is not
a physical volume but a virtual database storage volume that spans multiple Availability Zones to
better support worldwide applications. Each zone has its own copy of the database cluster data.

 The primary database instance is where all read and write operations are performed against the
cluster volume. Each Aurora cluster has one primary database instance.

 An Aurora replica is a copy of the primary database instance whose sole responsibility is to
serve data, that is, read-only operations. There can be up to 15 replicas per primary
database instance to maintain high accessibility and availability across all the Zones. In a
failure condition, Aurora switches over to a replica when the primary database is not available.
Replicas also help reduce the read workload on the primary database.

 Aurora also supports multi-master clusters. In multi-master replication, all the
database instances have read and write capabilities. In AWS terminology, they are
known as reader and writer database instances.

 You can configure backups of your database to Amazon S3. This ensures the
safety of your database even in the worst case where the whole cluster is down.

 For an unpredictable workload, you can use Aurora Serverless to automatically start,
scale, and shut down the database to match application demand.
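
To make the cluster model concrete, here is a hedged sketch of provisioning an Aurora MySQL cluster with one primary instance using boto3; all names, instance classes, and credentials are illustrative assumptions.

    import boto3

    rds = boto3.client("rds")

    # 1. Create the Aurora cluster (owns the shared cluster volume that spans AZs).
    rds.create_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",      # hypothetical name
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="ChangeMe12345",           # use AWS Secrets Manager in practice
    )

    # 2. Add a DB instance to the cluster; the first instance becomes the writer (primary).
    rds.create_db_instance(
        DBInstanceIdentifier="my-aurora-instance-1",  # hypothetical name
        DBClusterIdentifier="my-aurora-cluster",
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",
    )

    # Additional create_db_instance calls against the same cluster would add Aurora replicas (readers).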

Understanding the Amazon RDS Shared Responsibility Model

In the Amazon RDS Shared Responsibility Model, AWS and the customer share duties to ensure
the security, availability, and performance of database services. AWS manages the infrastructure,
while customers take control of their data and database management.

AWS Responsibilities

 Infrastructure and Hosting: AWS takes care of the foundational infrastructure for RDS,
including data centers, hardware, and networking. AWS ensures the physical
security of servers, network connectivity, and operational aspects like power and
cooling.

 Database Software and Patching: AWS manages the installation, maintenance, and
patching of the database engine. This includes automatic updates and bug fixes for
supported versions of database engines, ensuring customers always operate on secure and
stable software.

 Backup and High Availability: AWS automates regular backups and offers features like
Multi-AZ deployments, which provide failover support across availability zones to
maintain uptime and minimize disruptions. AWS also manages storage scaling, data
replication, and recovery processes.

 Compliance and Security in the Cloud: AWS handles security at the infrastructure level by
managing firewalls and supporting encryption both at rest and during transit. The company
ensures compliance with global standards but customers must also ensure their use of AWS
services aligns with applicable regulations.

Customer Responsibilities

 Database Configuration and Tuning: Customers are tasked with configuring the database
and setting parameters to optimize query performance. Tuning queries is essential for
improving performance and users can leverage tools like Amazon RDS Performance
Insights to identify potential bottlenecks.

 Data Security and Access Control: Customers must encrypt any sensitive data stored in the
database and manage user access using AWS IAM roles and policies. Effective permission
management is essential to prevent unauthorized access to databases.

 Monitoring and Query Performance: Customers are responsible for monitoring database
activity and query performance. Using tools like RDS Performance Insights along with
CloudWatch and CloudTrail they must consistently track queries and workloads to
maintain efficient database operation and promptly resolve performance issues.

How Amazon Aurora works with Amazon RDS

Amazon Aurora is delivered and managed through Amazon RDS. The following points describe
how the two services work together.

 Managed via Amazon RDS: Aurora utilizes the Amazon RDS platform for administrative
tasks such as provisioning, patching, backups, and recovery. This management is performed
through the AWS Management Console, AWS CLI, and APIs, and allows developers and
system administrators to focus on building and running their applications rather than
dealing with underlying infrastructure management.

 Operations Based on Clusters: Unlike standard RDS instances, Aurora operates on entire
clusters of database servers that are automatically replicated. This architecture ensures high
availability, easy scaling, and efficient resource management.

 High Availability: Aurora replicates data across multiple Availability Zones for fault
tolerance and automatic failover is handled by RDS in case of any failure.
 Automated Scalability: Aurora takes advantage of RDS automatic scaling capabilities,
which enable it to adjust storage and compute resources dynamically based on real-time
workload demands.

 Seamless Data Migration: Migrating from Amazon RDS for MySQL or PostgreSQL to
Aurora is simple. You can use Amazon RDS snapshots or set up one-way replication to
transfer your data smoothly. This allows you to benefit from Aurora’s improved
performance, scalability, and reliability without interrupting existing workflows.

 DB Engine Selection: When setting up a new database in Amazon RDS, users can opt for
Aurora MySQL or Aurora PostgreSQL as the engine of choice. This offers the same
familiarity as using traditional MySQL or PostgreSQL engines but with Aurora’s
performance boosts and reliability features.

Amazon Aurora Pricing

Amazon Aurora uses a pay-as-you-go pricing model which means you only pay for the resources
you actually use. Here is a quick overview of the main factors that influence Aurora's pricing.

 Instance Pricing: Aurora charges based on the instance type and size you choose, with
different prices for MySQL and PostgreSQL compatible instances. Larger instances cost
more, but they also provide higher performance.

 Storage Costs: Aurora scales storage automatically according to your usage and you are
charged per gigabyte of storage used. The benefit here is you only pay for the storage you
need.

 Backup Storage: Aurora includes automated backups at no additional charge for up to the
same amount of storage as your database. Additional backup storage is charged per GB.

 I/O Requests: You are billed for the input/output operations (I/O requests) performed by
your database. Aurora offers cost efficiency by using optimized I/O operations for high
performance.

 Data Transfer: Data transfer between Amazon Aurora and other AWS services is generally
free within the same Region, while charges may apply for cross-Region data transfer.

Advantages of Amazon Aurora

 Security: Because Aurora is an AWS service, users benefit from AWS security and can
use the IAM features.
 Availability: Multiple replicas of the DB instance across numerous Availability Zones
guarantee high availability.

 Scalability: With Aurora serverless, the user can set-up the database to automatically scale
up and scale down with application demand.

 Performance: Up to 5 times faster than MySQL and 3 times faster than PostgreSQL, with
the simplicity and cost-effectiveness of an open-source database.

 Upkeep: Aurora requires essentially zero server maintenance.

 Management Console: The AWS Management Console is easy to use, and its features let
you set up an Aurora cluster quickly.

Limitation of Amazon Aurora

 At present Aurora supports MySQL 5.6.10 compatibility, so if you need newer features or
want an older version of MySQL, you can't access them.

 The user can't use MyISAM tables since Aurora only supports InnoDB at present.

What is DynamoDB?

Amazon DynamoDB is a cloud-native NoSQL primarily key-value database. Let’s define each
of those terms.

 DynamoDB is cloud-native in that it does not run on-premises or even in a hybrid cloud; it only
runs on Amazon Web Services (AWS). This enables it to scale as needed without requiring a
customer’s capital investment in hardware. It also has attributes common to other cloud-native
applications, such as elastic infrastructure deployment (meaning that AWS will provision more
servers in the background as you request additional capacity).
 DynamoDB is NoSQL in that it does not support ANSI Structured Query Language (SQL).
Instead, it uses a proprietary API based on JavaScript Object Notation (JSON). This API is
generally not called directly by user developers, but invoked through AWS Software Developer
Kits (SDKs) for DynamoDB written in various programming languages (C++, Go, Java,
JavaScript, Microsoft .NET, Node.js, PHP, Python and Ruby).
 DynamoDB is primarily a key-value store in the sense that its data model consists of key-value
pairs in a schemaless, very large, non-relational table of rows (records). It does not support
relational database management systems (RDBMS) methods to join tables through foreign keys.
It can also support a document store data model using JavaScript Object Notation (JSON).

DynamoDB’s NoSQL design is oriented towards simplicity and scalability, which appeal to
developers and devops teams respectively. It can be used for a wide variety of semistructured data-
driven applications prevalent in modern and emerging use cases beyond traditional databases, from
the Internet of Things (IoT) to social apps or massive multiplayer games. With its broad
programming language support, it is easy for developers to get started and to create very
sophisticated applications using DynamoDB.

What is a DynamoDB Database?

Outside of Amazon employees, the world doesn’t know much about the exact internals of this
database. There is a development version known as DynamoDB Local, written in Java and meant to
run on developer laptops, but the cloud-native database architecture is proprietary and closed-source.

While we cannot describe exactly what DynamoDB is, we can describe how you interact with it.
When you set up DynamoDB on AWS, you do not provision specific servers or allocate set
amounts of disk. Instead, you provision throughput — you define the database based
on provisioned capacity — how many transactions and how many kilobytes of traffic you wish to
support per second. Users specify a service level of read capacity units (RCUs) and write capacity
units (WCUs).
As stated above, users generally do not directly make DynamoDB API calls. Instead, they will
integrate an AWS SDK into their application, which will handle the back-end communications
with the server.
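
As a hedged illustration of provisioning throughput through an SDK rather than the raw API, the following boto3 sketch creates a table with explicit RCUs and WCUs; the table and attribute names are assumptions made for the example.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Create a table and declare the read/write capacity you want to provision.
    dynamodb.create_table(
        TableName="Orders",                                        # hypothetical table
        KeySchema=[
            {"AttributeName": "CustomerId", "KeyType": "HASH"},    # partition key
            {"AttributeName": "OrderId", "KeyType": "RANGE"},      # sort key
        ],
        AttributeDefinitions=[
            {"AttributeName": "CustomerId", "AttributeType": "S"},
            {"AttributeName": "OrderId", "AttributeType": "S"},
        ],
        ProvisionedThroughput={
            "ReadCapacityUnits": 5,     # RCUs
            "WriteCapacityUnits": 5,    # WCUs
        },
    )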

DynamoDB data modeling needs to be denormalized. For developers used to working with both
SQL and NoSQL databases, the process of rethinking their data model is nontrivial, but also not
insurmountable.

DynamoDB, ACID and BASE

An ACID database is a database that provides the following properties:

 Atomicity
 Consistency
 Isolation
 Durability

DynamoDB, when using DynamoDB Transactions, displays ACID properties.

However, without the use of transactions, DynamoDB is usually considered to display BASE
properties:

 Basically Available
 Soft-state
 Eventually consistent
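
For the transactional (ACID) case mentioned above, a minimal boto3 sketch using DynamoDB Transactions might look like the following; the two tables and attributes are hypothetical and only illustrate that both writes succeed or fail together.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Write an order and bump the customer's order counter atomically:
    # either both operations commit or neither does.
    dynamodb.transact_write_items(
        TransactItems=[
            {
                "Put": {
                    "TableName": "Orders",                      # hypothetical table
                    "Item": {
                        "CustomerId": {"S": "c-1001"},
                        "OrderId": {"S": "o-5001"},
                        "Status": {"S": "NEW"},
                    },
                }
            },
            {
                "Update": {
                    "TableName": "Customers",                   # hypothetical table
                    "Key": {"CustomerId": {"S": "c-1001"}},
                    "UpdateExpression": "SET OrderCount = if_not_exists(OrderCount, :zero) + :one",
                    "ExpressionAttributeValues": {
                        ":one": {"N": "1"},
                        ":zero": {"N": "0"},
                    },
                }
            },
        ]
    )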

DynamoDB Scalability and High Availability

DynamoDB scalability includes methods such as autosharding and load balancing. Autosharding
means that when load on a single Amazon server reaches a certain point, the database can select a
subset of records and place that data on a new node. Traffic between the new and existing
servers is load balanced so that, ideally, no one node is impacted with more traffic than the others.
However, the exact methods of how the database supports autosharding and load balancing are
proprietary, part of its internal operational mechanics, and are not visible to nor controllable by
users.
Amazon DynamoDB Data Modeling

As mentioned before, DynamoDB is a key-value store database that also supports a document-oriented
JSON data model. Data is indexed using a primary key composed of a partition key and a sort key.
There is no fixed schema for data in the same table; each item can be very different from the others.
Unlike traditional SQL systems, where data models can be created long before you need to know
how the data will be analyzed, with DynamoDB, as with many other NoSQL databases, data should
be modeled based on the types of queries you intend to run.
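To show what query-driven modeling looks like in practice, here is a small boto3 sketch that reads all items for one partition key; the table and key names continue the hypothetical "Orders" example used earlier.

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table

    # Fetch every order belonging to a single customer (one partition),
    # which is the access pattern this key design was modeled around.
    response = table.query(
        KeyConditionExpression=Key("CustomerId").eq("c-1001")
    )
    for item in response["Items"]:
        print(item["OrderId"], item.get("Status"))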

DynamoDB Architecture for Data Distribution

Amazon Web Services (AWS) guarantees that DynamoDB tables span Availability Zones. You
can also distribute your data across multiple Regions by using DynamoDB global tables to provide
greater resiliency in case of a disaster. However, with global tables you need to keep in mind that
your data is eventually consistent across Regions.
When to Use DynamoDB

Amazon DynamoDB is most useful when you need to rapidly prototype and deploy a key-value
store database that can seamlessly scale to multiple gigabytes or terabytes of information — what
are often referred to as “Big Data” applications. Because of its emphasis on scalability and high
availability DynamoDB is also appropriate for “always on” use cases with high volume
transactional requests (reads and writes).

DynamoDB is inappropriate for extremely large data sets (petabytes) with high frequency
transactions where the cost of operating DynamoDB may make it prohibitive. It is also important
to remember DynamoDB is a NoSQL database that uses its own proprietary JSON-based query
API, so it should be used when data models do not require normalized data with JOINs across
tables which are more appropriate for SQL RDBMS systems.

Amazon DynamoDB Ecosystem

DynamoDB applications can be developed using Software Development Kits (SDKs) available
from Amazon in a number of programming languages.

 C++
 Clojure
 Coldfusion
 Erlang
 F#

 Go
 Groovy/Rails
 Java
 JavaScript
 .NET

 Node.js
 PHP
 Python
 Ruby
 Scala

There are also a number of integrations for DynamoDB to connect with other AWS services and
open source big data technologies, such as Apache Kafka, and Apache Hive or Apache
Spark via Amazon EMR.

DynamoDB Streams

Amazon DynamoDB Streams is a time-ordered sequence of item-level changes (insert,
update, delete) in a DynamoDB table.
It captures these changes in near real-time and stores them for up to 24 hours so applications
can process them later.

You can think of it like a change log for your DynamoDB table — every time something
changes, an event is recorded.

Key Features

 Time-ordered: Events are stored in the same order in which changes occur.
 Up to 24-hour retention: After 24 hours, the data is automatically removed.
 Granular details: Can store before and after images of items.
 Real-time event-driven: Easily integrate with AWS Lambda to react instantly to
changes.
 Shard-based: Data is partitioned into shards for high scalability.

How it Works

1. Enable Streams on a table (you choose what kind of data images to capture; see the sketch after this list):
o Keys only – Only primary key attributes.
o New image – Full item after the change.
o Old image – Full item before the change.
o New and old images – Both before and after.
2. When an item changes:
o DynamoDB writes the change event to the stream.
o The event is assigned a sequence number and placed in the right shard.
3. Consumers read from the stream:
o AWS Lambda – Triggers automatically on each event.
o Kinesis Adapter – Reads with Kinesis Client Library (KCL).
o Custom consumers – You can poll the stream via the DynamoDB Streams API.
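
The sketch below (referenced in step 1) shows one way to enable a stream on an existing table with boto3; the table name and view type are illustrative assumptions.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Turn on DynamoDB Streams for an existing table and capture
    # both the old and new images of each changed item.
    dynamodb.update_table(
        TableName="Orders",                              # hypothetical table
        StreamSpecification={
            "StreamEnabled": True,
            "StreamViewType": "NEW_AND_OLD_IMAGES",
        },
    )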
Common Use Cases

 Real-time data processing
E.g., updating a search index (Elasticsearch/OpenSearch) when a record changes.
 Change data capture (CDC)
Keep other data stores in sync with DynamoDB.
 Event-driven workflows
Trigger business processes when specific changes occur.
 Audit logging
Keep track of all changes for compliance or debugging.
 Cross-region replication
Maintain a backup or copy of a table in another region.

Architecture Example

Example scenario:

 A "Orders" table in DynamoDB.


 DynamoDB Streams enabled with New and Old Images.
 AWS Lambda function subscribed to the stream.
 Every time an order’s status changes, Lambda sends a notification to SNS.

Flow:

1. Client updates an item in DynamoDB.
2. DynamoDB writes the change to the stream.
3. Lambda triggers from the stream.
4. Lambda processes the event (e.g., sends SMS/email).
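
A hedged sketch of the Lambda function for this scenario is shown below; the SNS topic ARN and attribute names are assumptions, and the handler simply publishes a notification whenever an order's Status attribute changes.

    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:order-status-changes"   # hypothetical topic

    def lambda_handler(event, context):
        # Each record is one item-level change delivered by DynamoDB Streams.
        for record in event["Records"]:
            if record["eventName"] != "MODIFY":
                continue
            old_image = record["dynamodb"].get("OldImage", {})
            new_image = record["dynamodb"].get("NewImage", {})
            old_status = old_image.get("Status", {}).get("S")
            new_status = new_image.get("Status", {}).get("S")
            if old_status != new_status:
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="Order status changed",
                    Message=json.dumps({
                        "orderId": new_image.get("OrderId", {}).get("S"),
                        "oldStatus": old_status,
                        "newStatus": new_status,
                    }),
                )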

Key Limits & Considerations

 Retention: Only 24 hours of data.
 Read throughput: Streams use read capacity separately from table reads.
 Ordering: Guaranteed per partition key, not across all items.
 Shard limits: Streams are split into shards (max ~2 MB/sec read, 1 MB/sec write per
shard).
 Pricing: Streams are charged for read requests and Lambda invocations, not storage.

AWS Management Tools


AWS Management Tools are a suite of services and utilities that help you monitor, control,
automate, and optimize AWS resources.
They provide administrators, developers, and DevOps teams with the ability to manage
infrastructure efficiently, ensure compliance, and improve operational performance.

1. AWS Management Console


 Purpose: A web-based graphical interface to access and manage AWS services.
 Key Features:
o User-friendly dashboard to launch, configure, and monitor services.
o Service search and favorites for quick access.
o Integrated resource tagging.
 Use Case: Ideal for manual setup, quick changes, and visualization of AWS resources.

2. AWS Command Line Interface (AWS CLI)

 Purpose: A unified command-line tool to manage AWS services programmatically.


 Key Features:
o Automate repetitive tasks via scripts.
o Supports complex configurations using JSON/YAML.
o Works with all major AWS services.
 Use Case: Efficient for DevOps automation and quick execution of bulk commands.

3. AWS SDKs (Software Development Kits)

 Purpose: Allow developers to interact with AWS services using popular programming
languages.
 Languages Supported: Java, Python (boto3), JavaScript, .NET, PHP, Ruby, Go, C++,
etc.
 Use Case: Integrating AWS capabilities directly into applications.

4. AWS CloudFormation

 Purpose: Infrastructure as Code (IaC) service for modeling and setting up AWS
resources automatically.
 Key Features:
o Define templates in JSON or YAML.
o Automates provisioning and updates.
o Supports cross-region and cross-account deployment.
 Use Case: Repeatable, version-controlled environment deployment.

5. AWS CloudTrail

 Purpose: Records AWS API calls for auditing and compliance.


 Key Features:
o Tracks “who did what, when, and from where.”
o Integrates with Amazon S3 for log storage.
o Supports real-time event monitoring with CloudWatch.
 Use Case: Security audits, compliance verification, and operational troubleshooting.

6. Amazon CloudWatch

 Purpose: Monitoring and observability service for AWS resources and applications.
 Key Features:
o Metrics, logs, and alarms.
o Automated actions based on thresholds.
o Dashboards for real-time insights.
 Use Case: Performance monitoring, operational alerts, and proactive scaling.

7. AWS Systems Manager

 Purpose: Centralized operational hub for managing AWS and hybrid cloud
infrastructure.
 Key Features:
o Run Command for remote automation.
o Patch Manager for security updates.
o Parameter Store for configuration data.
 Use Case: Automating patching, software inventory, and remote operations.

8. AWS Trusted Advisor

 Purpose: Automated best-practice checks and cost-optimization insights.


 Key Categories:
o Cost Optimization
o Performance
o Security
o Fault Tolerance
o Service Limits
 Use Case: Improve efficiency, security, and cost management.

9. AWS Config

 Purpose: Tracks configuration changes of AWS resources for compliance and governance.
 Key Features:
o Resource inventory.
o Configuration history.
o Compliance auditing against defined rules.
 Use Case: Continuous compliance monitoring and remediation.

10. AWS Service Catalog

 Purpose: Allows organizations to create and manage approved catalogs of AWS resources.
 Use Case: Standardizing deployments and ensuring governance in large teams.

11. AWS License Manager

 Purpose: Simplifies license tracking and compliance for software running on AWS.
 Use Case: Avoids license overuse and simplifies vendor audits.

Amazon CloudWatch
Amazon CloudWatch is a service used for monitoring and observing resources in real time, built
for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch
provides users with data and actionable insights to monitor their respective applications, respond to
system-wide performance changes, and optimize resource utilization. CloudWatch collects
monitoring and operational data in the form of logs, metrics, and events, providing its users with
an aggregated view of AWS resources, applications, and services that run on AWS.
CloudWatch can also be used to detect anomalous behavior in your environments, set warnings and
alarms, visualize logs and metrics side by side, take automated actions, and troubleshoot issues.

Amazon CloudWatch Agent

The Amazon CloudWatch agent is an open-source, lightweight tool that is deployed on resources to
collect data from them. Some of the data it collects is as follows:

 Metrics: The Amazon CloudWatch agent records data such as CPU utilization, memory
usage, disk I/O, and other system-level statistics.

 Logs: It collects all the logs, which are used for further analysis.

 Events: Significant occurrences such as instance launches, modifications to security groups,
and other events.

What is Amazon CloudWatch?

Amazon CloudWatch is a monitoring and observability service provided by Amazon Web Services
(AWS) that enables users to collect and track metrics, monitor log files, set alarms, and
automatically react to changes in AWS resources. It helps users gain insights into the operational
health, performance, and resource utilization of their AWS infrastructure and applications.

Why Amazon CloudWatch?

Amazon CloudWatch is a monitoring service offered by Amazon Web Services to monitor aspects
of applications such as the following.

 Performance.

 Health of the application.

 Resource usage, etc.

You can set alarms on the resource usage of your applications; when the limits are exceeded,
a notification is automatically sent to your email.
How Amazon CloudWatch Works

First, Amazon CloudWatch is configured for the resources that you want to monitor. The agents
that are configured then collect logs from those resources, whether the services run on premises
or on AWS. CloudWatch also provides an overall view of the resources through a dashboard, from
which you can troubleshoot issues. CloudWatch can additionally perform operational changes in
response to changes in the resources, for example triggering AWS Auto Scaling of resources based
on the changes that occur. CloudWatch performs real-time analysis based on the logs that it
receives.

Amazon CloudWatch Features

Metrics

 A metric represents a time-ordered set of data points that are published to Amazon CloudWatch.

 Each data point is marked with a timestamp.

 A metric is a variable that is monitored, and data points are the values of that variable over
time.

 Metrics are uniquely defined by a name, a namespace, and zero or more dimensions.

 Metric math is used to query multiple CloudWatch metrics and use math expressions to
create new time series based on these metrics.

Dimensions

 A dimension is a name/value pair which uniquely identifies a metric.

 Dimensions are the unique identifiers for a metric, so whenever you add a unique
name/value pair to one of the metrics, you are creating a new variation of that metric.

Statistics

 Statistics are metric data aggregations over specified periods of time.

 The few available statistics on CloudWatch are maximum, minimum, sum, average, and
sample count.

Alarm

 It is used to automatically initiate actions on our behalf.

 It watches a single metric over a specified time period and performs one or more specified
actions based on the value of the metric.
 The estimated AWS charges can also be monitored using the alarm.

Percentiles

 It indicates the relative standing of a value within a dataset.

 It helps the user to get a better understanding of the distribution of metric data.

CloudWatch dashboard

 A user-friendly CloudWatch console is available, which is used for monitoring resources
in a single view.

 There is no limit on the number of CloudWatch dashboards you can create.

 These dashboards are global and not Region-specific.

CloudWatch agent

 The agent must be installed on the instances or servers to be monitored.

 It collects logs and system-level metrics from EC2 instances and on-premises servers.

CloudWatch Events

 CloudWatch Events helps you to create a set of rules that match events (e.g., the stopping
of an EC2 instance).

 These events can be routed to one or more targets like AWS Lambda functions, Amazon
SNS Topics, Amazon SQS queues, and other target types.

 CloudWatch Events observes the operational events continuously and whenever there is
any change in the state of the event, it performs the action by sending notifications,
activating lambda, etc.

 An event indicates a change in the AWS environment. Whenever there is a change in the
state of AWS resources, events are generated.

 Rules are used for matching events and routing to targets.

 Targets process events. They include Amazon EC2 instances, AWS Lambda functions, etc.
A target receives events in JSON format.

CloudWatch logs
 Amazon CloudWatch logs enable you to store, monitor, and access files from AWS
resources like Amazon EC2 instances, Route53, etc.

 It also helps you to troubleshoot your system errors and maintain the logs in highly durable
storage.

 It also creates logs of information about the DNS queries that Route 53 receives.

Getting started with Amazon CloudWatch

Example: notifying the gfg website management team when the instance hosting the gfg website
runs into trouble. Whenever the CPU utilization of the instance (on which the GeeksForGeeks
website is hosted) goes above 80%, a CloudWatch alarm is triggered. The alarm then activates the
SNS topic, which sends an alert email to the attached gfg subscribers.

Use Cases for CloudWatch

 CloudWatch can be used to monitor the performance of AWS resources, applications, and
infrastructure components in real-time

 CloudWatch allows users to set up alarms that trigger notifications or automated actions in
response to changes in the state of their resources.

 CloudWatch can be used to store, search, and analyze log data from various AWS services,
applications, and infrastructure components.

 CloudWatch can be used to monitor the performance of EC2 instances, RDS databases,
and other resources, which can then be used to trigger automatic scaling events.

Benefits of Amazon CloudWatch

 A large amount of data is produced by web applications nowadays, so Amazon CloudWatch
acts as a dashboard that contains an organized collection of all that data.

 It improves the total cost of ownership by providing alarms and by taking automated
actions when the limits you set are exceeded.

 Applications and resources can be optimized by examining the logs and metric data.

 Detailed Insights from the application are provided through data like CPU utilization,
capacity utilization, memory utilization, etc.

 It provides a great platform to compare and contrast the data produced by various AWS
services.

Drawbacks of Amazon CloudWatch

 CloudWatch can be expensive, especially for large-scale monitoring and logging needs.

 CloudWatch may not be able to handle large amounts of log data, especially during spikes
in usage, making it difficult to maintain a consistent level of monitoring and logging.

 The monitoring and logging processes of CloudWatch can consume significant system
resources, impacting the overall performance of an application.

 Integrating CloudWatch with other AWS services and third-party tools can be challenging.

 Setting up and managing CloudWatch can be complex, especially for users who are not
familiar with cloud-based systems.

Challenges of CloudWatch

 Complexity in Setup: Setting up CloudWatch monitoring and configuring alarms can be
challenging, especially for users who are new to AWS. Understanding which metrics to
monitor and how to interpret them effectively requires familiarity with AWS services and
best practices.

 Limited Visibility and Granularity: CloudWatch provides metrics and logs at a high level,
which may lack the granularity needed for detailed analysis and troubleshooting. Users
may encounter difficulty in pinpointing the root cause of issues due to limited visibility
into specific system components or resources.

 Cost Management: CloudWatch costs can accumulate, particularly when monitoring a
large number of resources or enabling detailed logging and retention settings. Users need
to carefully manage and optimize their CloudWatch configurations to avoid unexpected
charges while ensuring adequate monitoring coverage.

Amazon CloudWatch Pricing

Amazon CloudWatch offers different pricing options, as follows.

 Free Tier: Amazon CloudWatch offers a free tier of up to 7 metrics, 3 alarms, and 500
custom dashboards per month, plus log storage of up to 5 GB per month.

 Pay-as-you-go: You are charged per item used; each metric has its own base charge, logs
are charged per GB, and dashboards are charged per dashboard. In short, you are charged
according to how much you use.

CloudWatch Metrics
1. Introduction

Amazon CloudWatch Metrics are time-ordered sets of data points that represent the
performance of your AWS resources, applications, or custom systems.
They’re essentially numeric measurements collected over time, used for monitoring, alerting, and
operational insights.

For example:

 CPU utilization of an EC2 instance


 Number of requests to an API Gateway
 Latency of a Lambda function

Each metric is stored in CloudWatch with:

 Namespace → A container for related metrics (e.g., AWS/EC2).


 Metric Name → The specific measurement (e.g., CPUUtilization).
 Dimensions → Name–value pairs for filtering (e.g., InstanceId=i-1234567890abcdef0).
 Timestamp → When the metric data was recorded.
 Value → The measurement itself.

2. Key Properties

 Namespace: Logical container for metrics (default AWS services or custom namespaces).
 Metric Name: Describes what is being measured.
 Dimensions: Attributes to uniquely identify a metric (up to 30 per metric).
 Datapoints: Individual time-stamped measurements.
 Unit: Standard measurement unit (e.g., Seconds, Bytes, Percent).

3. Types of Metrics

1. AWS-Provided Metrics
o Automatically published by AWS services.
o Example: EC2 → CPUUtilization, DiskReadOps, NetworkIn.
2. Custom Metrics
o Created by you using the PutMetricData API or an SDK (see the sketch after this list).
o Useful for application-level monitoring (e.g., number of active users).
3. High-Resolution Metrics
o Granularity of 1 second (instead of default 1 minute).
o Useful for fast-changing systems like real-time gaming or high-frequency trading.
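
Below is a minimal sketch (referenced in the list above) of publishing a custom metric with boto3; the namespace, metric name, and dimension are placeholder assumptions.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish one data point for a custom application-level metric.
    cloudwatch.put_metric_data(
        Namespace="MyApp",                                   # hypothetical custom namespace
        MetricData=[
            {
                "MetricName": "ActiveUsers",                 # hypothetical metric
                "Dimensions": [{"Name": "Environment", "Value": "prod"}],
                "Value": 42,
                "Unit": "Count",
            }
        ],
    )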
4. Metric Granularity & Retention
CloudWatch stores metrics at different resolutions:

 1-second resolution: retained for 3 hours.
 1-minute resolution: retained for 15 days.
 5-minute resolution: retained for 63 days.
 1-hour resolution: retained for 455 days (~15 months).

5. Using Metrics
 Dashboards → Visualize metrics in real-time.
 Alarms → Set thresholds for metrics to trigger actions.
 Logs & Insights → Combine metric trends with log data.
 Anomaly Detection → Automatic learning of normal patterns and detecting deviations.

6. Best Practices

 Use consistent namespaces for organization.


 Add meaningful dimensions to slice and filter data easily.
 For critical systems, enable high-resolution metrics for better responsiveness.
 Combine metrics with CloudWatch Alarms to trigger automated recovery (via SNS,
Lambda, etc.).

CloudWatch Alarms

Amazon CloudWatch Alarms are a core AWS monitoring feature that automatically track and
respond to changes in your resources’ metrics. They let you define specific thresholds for
performance or operational metrics, and when those thresholds are breached, CloudWatch can
notify you or trigger automated actions.

1. Purpose of CloudWatch Alarms

CloudWatch Alarms are used to:

 Monitor AWS resources and custom metrics (e.g., CPU utilization, latency, error
rates).
 Alert administrators via Amazon Simple Notification Service (SNS), email, or SMS
when thresholds are crossed.
 Trigger automated actions like scaling EC2 instances up or down, restarting services, or
running AWS Lambda functions.

2. Core Components

A CloudWatch Alarm configuration consists of:

1. Metric
o The data you want to monitor (e.g., CPUUtilization for an EC2 instance,
Invocations for a Lambda function).
o Can be AWS-provided metrics or custom metrics you publish.
2. Statistic or Math Expression
o Determines how the metric values are aggregated (e.g., Average, Sum, Maximum,
Minimum, p90 percentile).
3. Period
o The length of time over which the metric data is aggregated (e.g., 60 seconds, 5
minutes).
4. Threshold
o The value that triggers the alarm if crossed.
5. Comparison Operator
o How the metric is compared to the threshold:
 GreaterThanThreshold
 GreaterThanOrEqualToThreshold
 LessThanThreshold
 LessThanOrEqualToThreshold
 EqualToThreshold
6. Evaluation Periods
o Number of consecutive periods that must breach the threshold before the alarm
changes state.

3. Alarm States

CloudWatch Alarms can be in one of three states:

 OK – The metric is within the defined threshold.


 ALARM – The metric has breached the threshold for the required evaluation periods.
 INSUFFICIENT_DATA – Not enough data is available to determine the state.

4. Actions Triggered by Alarms

CloudWatch alarms can:

 Send notifications via Amazon SNS (email, SMS, HTTP endpoints).


 Trigger Auto Scaling actions to maintain application availability and performance.
 Stop, start, or terminate EC2 instances automatically.
 Invoke AWS Lambda functions for custom responses.
5. Types of CloudWatch Alarms

1. Metric Alarms
o Monitor a single metric or a math expression based on multiple metrics.
2. Composite Alarms
o Combine multiple alarms into a single one.
o Uses Boolean logic (AND/OR) to reduce noise from multiple alerts.

6. Example Use Cases

 EC2 Monitoring: Alarm if CPU utilization is > 80% for 5 minutes.


 Billing Alarm: Trigger if AWS bill exceeds $100.
 S3 Storage Monitoring: Alarm if bucket size crosses a threshold.
 Custom App Metrics: Alarm if API error rate exceeds 5% for 3 consecutive minutes.

7. Example: Setting a CloudWatch Alarm (EC2 CPU Utilization)

Scenario: Trigger an alarm if CPU utilization of an EC2 instance exceeds 80% for 5 minutes.

1. Metric: CPUUtilization
2. Statistic: Average
3. Period: 60 seconds
4. Threshold: 80%
5. Evaluation Periods: 5
6. Action: Send SNS notification and trigger Auto Scaling (a sketch of this alarm in code follows).
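
The same alarm could be created programmatically; the following boto3 sketch mirrors the settings above, with the instance ID and SNS topic ARN as placeholder assumptions.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when average CPUUtilization stays above 80% for 5 consecutive 60-second periods.
    cloudwatch.put_metric_alarm(
        AlarmName="ec2-high-cpu",                                             # hypothetical name
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-1234567890abcdef0"}],  # placeholder instance
        Statistic="Average",
        Period=60,
        EvaluationPeriods=5,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # hypothetical SNS topic
    )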

8. Best Practices

 Use Composite Alarms to minimize false positives.


 Tune evaluation periods to balance responsiveness and stability.
 Tag alarms for better management.
 Integrate with Incident Management Tools like PagerDuty or Opsgenie.
 Test alarm actions before production deployment.

AWS CloudTrail
AWS CloudTrail is a service that enables governance, compliance, and operational and risk
auditing of your AWS account. It records and logs every API call made on your AWS account,
capturing details such as the identity of the API caller, the time of the API call, the source IP
address, the request parameters, and the response elements returned by the AWS service. This
comprehensive logging allows you to track changes and activities across your AWS
infrastructure, helping with security analysis, resource change tracking, troubleshooting, and
meeting compliance requirements.

CloudTrail provides three ways to record events:


 Event History: Your AWS account has CloudTrail activated by default, and you have
immediate access to the CloudTrail Event history. A viewable, searchable, printable, and
immutable record of the last 90 days' worth of management events in an AWS Region is
available in the Event history. The AWS Management Console, AWS Command Line
Interface, and AWS SDKs and APIs are all used to perform the activities that these events
record. The AWS Region where the event occurred is documented in the Event history.
The Event history can be viewed for free in CloudTrail.

 CloudTrail Lake: AWS CloudTrail Lake is a managed data lake used to record, store,
access, and analyze user and API activity on AWS for audit and security purposes.
CloudTrail Lake converts existing events in row-based JSON format to Apache ORC
format, a columnar storage format designed for quick data retrieval. Events are aggregated
into event data stores, which are immutable collections of events based on criteria you
choose by using advanced event selectors. The event data can be kept in an event data
store for a maximum of seven years (2,557 days). Using AWS Organizations, you may
construct an event data store for a single AWS account or for a number of AWS accounts.
Any CloudTrail logs that you currently have can be imported into an existing or new event
data store from your S3 buckets. With Lake dashboards, you can also see the top CloudTrail
event trends. See Creating event data stores and Working with AWS CloudTrail Lake for
further details.

 Trails: In addition to delivering and storing events in an Amazon S3 bucket, trails can also
deliver events to Amazon CloudWatch Logs and Amazon EventBridge. These events can
be fed into your security monitoring solutions. You may also search and examine your
CloudTrail logs using custom third-party tools or services like Amazon Athena.

Using AWS Organizations, you can build trails for a single AWS account or for a number of AWS
accounts. By logging Insights events, your management events can be analyzed for unusual behavior
in API call volumes and error rates. See Creating a trail for your AWS account for further details.

AWS CloudTrail Architecture

In the diagram above, an AWS account is created in the AWS environment. When a new account is
created, CloudTrail is activated by default. Whenever we carry out any operation using the AWS
account, such as signing in, creating or deleting EC2 instances, creating S3 buckets, or uploading
data into them, an API call is made on the back end.

The activities that we carry out with our AWS account can be performed in a variety of ways. For
instance, we can use the account with the aid of the AWS CLI (AWS Command Line Interface),
and we can also carry out activities using an SDK (Software Development Kit) or the AWS
Management Console. Whichever method we use, whenever we execute an activity from the
account, the back-end API is called. When the back-end API is called, an event is generated, and
the event log is saved in CloudTrail. An event is created in CloudTrail only when we carry out an
activity using the AWS account.
The event history for the AWS account activity we perform is retained for 90 days. It is possible to
keep event logs in an S3 bucket for longer than 90 days. SNS (Simple Notification Service)
notification configuration is also possible in CloudTrail.

Benefits of using AWS CloudTrail in AWS

 CloudTrail log file integrity validation: Log file integrity validation is a tool you may use
to help with IT security and auditing procedures.

 Security and Compliance: Meeting security and compliance standards is made easier with
CloudTrail. It supports security incident investigation and compliance audits by assisting
enterprises in identifying illegal or suspicious activity through the monitoring
of AWS actions.

 Resource Change Tracking: AWS resource changes over time can be tracked with
CloudTrail. This helps with resource management and troubleshooting by helping to spot
configuration changes, authorization changes, and resource removals.

 Alerting and Notifications: Businesses can configure alerts and notifications for a variety
of events that are logged in CloudTrail logs. The prompt response to urgent situations is
made possible by this proactive monitoring.

 Cross-Account and Multi-Region Support: Multi-account logging is supported by
CloudTrail, enabling businesses to centralize logging for numerous AWS accounts.
Additionally, it offers multi-Region logging, which consolidates logs from various AWS
Regions in one place for centralized analysis. This enables governance, compliance, and
auditing of your account, aids continuous monitoring and security analysis, and is simple
to manage and access.
How does AWS CloudTrail Work?

Your Amazon Web Services (AWS) account's activity is tracked and recorded by the AWS
CloudTrail service. It offers thorough logs of all API calls and operations made on your AWS
resources. This is how AWS CloudTrail functions:

 Data Collection: Activity in your AWS account is continuously monitored by CloudTrail.
An event is recorded whenever an AWS service or resource is used or updated through an
API call.

 Log Storage: You can define an Amazon S3 bucket where these log entries will be gathered
and stored. For your CloudTrail logs, you may set the bucket's location and retention time.

 Access Control: Policies set forth by AWS Identity and Access Management
(IAM) govern who has access to CloudTrail logs. Who is permitted to read, write, or
administer CloudTrail logs can be specified.

 Alerting and Notifications: You can configure in-the-moment alerts based on particular
occurrences or trends in your CloudTrail logs using CloudWatch Alarms. This enables you
to react rapidly to operational or security incidents.

 Log Generation: Each time an API is called, CloudTrail creates a log entry with
information on the caller, the action taken, the resource used, and the timestamp.

AWS CloudTrail features

 Comprehensive Logging: Captures detailed logs of API calls and activities across AWS
services, providing visibility into actions taken by users, applications, or AWS services.

 Audit and Compliance: Facilitates compliance auditing by tracking changes to resources
and enabling forensic analysis of security incidents through comprehensive logging.

 Integration with AWS Services: Integrates seamlessly with other AWS services like AWS
Lambda, S3, CloudWatch Logs, and CloudWatch Events for advanced monitoring and
automated responses to events.

 Multi-Account and Multi-Region Support: Supports logging and centralized management
across multiple AWS accounts and Regions, providing a unified view of activity across
complex AWS environments.

 Event History and Insights: Provides event history timelines and insights into API activity
trends, enabling operational troubleshooting, security analysis, and operational
intelligence.

Steps to set up AWS CloudTrail

Step 1: Login to AWS Console


 Visit AWS Academy and login to your account.

Step 2: Access AWS Academy Learner Lab

 Navigate to AWS Academy Learner Lab [52156] -> Modules.

Step 3: Launch AWS Academy Learner Lab

 Start the lab session and wait until the AWS status indicator turns green.

 Then click on the green AWS dot to open the console.

Step 4: Open CloudTrail Service

 Click on Services and search for "CloudTrail".

Step 5: Create CloudTrail

 Select "Create CloudTrail", name it as "MyTrail".

Step 6: Edit Storage Location

 Click on the created "MyTrail" and edit the storage location. Choose "Create new S3
bucket" and save changes.

Step 7: Save Changes

 Confirm and save changes to finalize the S3 bucket configuration.

Step 8: Confirm Settings

 Ensure data events are configured to deliver to the AWS CloudTrail console, Amazon S3
buckets, and optionally Amazon CloudWatch Logs.

Step 9: Monitor Data Events

 Data events are automatically stored in the designated S3 bucket.

Step 10: Access and Review Event Data

 Navigate to the S3 bucket, locate the first file, download it, and review the JSON formatted
data events.
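
The console steps above could also be performed programmatically. Here is a hedged boto3 sketch that creates a trail, starts logging, and looks up recent events; the trail and bucket names are assumptions, and the S3 bucket must already exist with a policy that allows CloudTrail to write to it.

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Create a trail that delivers log files to an existing S3 bucket.
    cloudtrail.create_trail(
        Name="MyTrail",                          # matches the trail name used in the steps above
        S3BucketName="my-cloudtrail-bucket",     # hypothetical bucket with a CloudTrail bucket policy
    )

    # Trails do not record events until logging is started.
    cloudtrail.start_logging(Name="MyTrail")

    # Query the last 90 days of management events, e.g., console sign-ins.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        MaxResults=5,
    )
    for event in events["Events"]:
        print(event["EventName"], event["EventTime"])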

Accessing CloudTrail

Accessing AWS CloudTrail Using These Methods:


 AWS Management Console: Access via web browser, navigate to CloudTrail service,
configure trails, view logs, and perform basic analysis.

 AWS CLI: Use commands like aws cloudtrail create-trail, aws cloudtrail describe-trails,
and aws cloudtrail lookup-events to manage trails, retrieve event history, and perform
automated tasks.

 AWS SDKs: Integrate CloudTrail into your applications using SDK functions to
programmatically manage trails, retrieve and process event data, and incorporate
CloudTrail insights into application logic.

 AWS CloudTrail API: Develop custom applications or scripts that interact directly with
CloudTrail API endpoints to automate tasks, perform complex queries, and integrate
CloudTrail data into external systems or reporting tools.
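As an example of the SDK approach listed above, the following is a minimal sketch using boto3 that mirrors the aws cloudtrail lookup-events CLI command; the lookup attribute and value are only illustrative.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent console sign-in events from the event history
# (the attribute value is an example; other keys such as Username also work).
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    MaxResults=10,
)

for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```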

AWS CloudTrail Use cases

 Security and Compliance Monitoring: Monitor API calls and actions across AWS services
to detect unauthorized access, changes to resources, and potential security breaches.
CloudTrail logs provide detailed visibility for compliance audits and regulatory
requirements.

 Operational Troubleshooting: Investigate operational issues by reviewing CloudTrail logs to understand the sequence of events leading to errors or unexpected behavior in your AWS environment. Helps in identifying root causes and improving system reliability.

 Change Management and Auditing: Track changes made to AWS resources over time,
including configuration changes, deployments, and updates. CloudTrail logs enable
auditing of resource history, aiding in change management and maintaining configuration
integrity.

 Incident Response and Forensics: Use CloudTrail logs during incident response to
reconstruct events, analyze the scope of an incident, and identify impacted resources.
Facilitates forensic investigation and timely resolution of security or operational incidents.

 Governance and Accountability: Establish accountability by logging actions performed by users, applications, or AWS services. CloudTrail provides a trail of actions taken, helping organizations enforce governance policies and maintain accountability across AWS accounts.

AWS Config
AWS Config is a service provided by Amazon Web Services (AWS) that enables you to assess, audit, and evaluate the configurations of your AWS resources. It continuously monitors and records the configuration changes that happen inside your AWS environment, giving insight into resource configuration history and supporting compliance, security, and operational best practices.

AWS Config helps you maintain control and visibility over your AWS infrastructure by tracking changes to resource configurations over time. It captures details such as configuration changes, relationships between resources, and the overall state of your environment.

By leveraging AWS Config, organizations can ensure that their AWS resources comply with internal policies, industry regulations, and security benchmarks. It also helps identify unauthorized changes, evaluate compliance against desired configurations, and remediate non-compliant resources. AWS Config improves visibility, control, and governance of your AWS environment.

AWS Config Concepts

 Configuration Items: Point-in-time records of the resources that AWS Config monitors. Each item includes metadata such as the resource type, ID, configuration, and relationships.

 Config Rules: Rules that you define to enforce desired configurations or compliance requirements. AWS Config evaluates these rules against configuration changes and reports compliance status.

 Configuration Recorder: The component that records the configurations of supported resources in your AWS account.

 Configuration History: A timeline of configuration changes for the resources in your account.

 Configuration Snapshot: A point-in-time view of the configuration of the resources in your account.

 Delivery Channel: Specifies where AWS Config sends configuration snapshots and change notifications, for example an Amazon S3 bucket or an Amazon SNS topic.

How Does AWS Config Work?

The configurations of your AWS resources are continuously monitored and stored in a centralized repository by AWS Config. Here is a brief outline of how it works:

 Configuration Recording: AWS Config captures and records the configurations of supported AWS resources in your account, including details such as resource attributes, relationships, and configuration history.

 Change Tracking: It tracks changes to resource configurations over time, identifying when modifications occur, what was changed, and who made the change. This provides a comprehensive audit trail of configuration changes.
 Rule Evaluation: AWS Config lets you define rules that enforce desired configurations or compliance requirements. It evaluates these rules whenever configurations change and reports their compliance status (a minimal example follows this list).

 Notifications and Alerts: AWS Config either integrates with AWS Lambda for custom
response actions or sends notifications via Amazon SNS (Simple Notification Service)
whenever a configuration change breaks a defined rule or sets off an alert condition.

 Centralized Management: AWS Config gives you a centralized view of your AWS environment, making it possible to analyze configuration trends, troubleshoot issues, and keep your resources in compliance.

 Automation: Through integration with AWS Lambda, you can automate responses to configuration changes based on predefined rules, for example to enforce policies, remediate issues, or trigger follow-up actions.
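As referenced in the rule-evaluation step, the following is a minimal sketch using boto3 that deploys an AWS managed Config rule and then checks its compliance status; the rule name is a placeholder and the configuration recorder is assumed to already be running.

```python
import boto3

config = boto3.client("config")

# Deploy an AWS managed rule that flags S3 buckets without versioning
# (the rule name is a placeholder).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-versioning-enabled",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

# After the rule has evaluated resources, report its compliance status.
result = config.describe_compliance_by_config_rule(
    ConfigRuleNames=["s3-bucket-versioning-enabled"]
)
for rule in result["ComplianceByConfigRules"]:
    print(rule["ConfigRuleName"], rule["Compliance"]["ComplianceType"])
```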

Step-by-Step Process to Set Up AWS Config

Step 1: Log in to the AWS Console

 Log in to the AWS Console with your credentials, or create a new account.

Step 2: Navigate to AWS Config

 In the AWS dashboard, search for AWS Config and click on it.

Step 3: Set up AWS Config

AWS Config provides a detailed view of the resources associated with your AWS account,
including how they are configured, how they are related to one another, and how the configurations
and their relationships have changed over time

 Now click on Get Started

Recording method: In the recording strategy, customize AWS Config to record configuration changes for all supported resource types, or only for the supported resource types that are relevant to you. Globally recorded resources (RDS global clusters and IAM users, groups, roles, and customer managed policies) may be recorded in more Regions than this one.

The recording strategy offers two options:

 All resource types with customizable overrides

 Specific resource types

 In Default settings, there is a Recording frequency option with two choices:

 Continuous recording: Record configuration changes continuously whenever a change occurs.

 Daily recording: Receive configuration data once every day only if a change has occurred.

Step 4: IAM role for AWS Config

Now choose IAM Role for AWS Config

 Use an existing AWS Config service-linked role

 Choose a role from your account

Step 5: Delivery method

Now choose a delivery method.

Amazon S3 bucket

 Create a bucket

 Choose a bucket from your account

 Choose a bucket from another account

Here, choose Create a bucket and provide a name for the bucket.

 Click on Next

 Review and Confirm

Step 6: Verify

 Now go to the Amazon S3 console and verify whether the S3 bucket was created.

 The S3 bucket should appear, confirming that it was created successfully.

 Inside the bucket you can check the object files that hold the AWS Config logs.

 Here you can see the Config files.

 Now go to the AWS Config dashboard and check the AWS Config usage metrics.

 This confirms that AWS Config was set up successfully.

Benefits of AWS Config


 Continuous Monitoring: AWS Config continuously monitors your AWS resources, giving ongoing visibility into their configurations.

 Change Tracking: It tracks changes to resource configurations over time, helping you understand who made a change, when it happened, and what was changed.

 Assurance of Compliance: By letting you define and enforce compliance rules, AWS Config helps ensure that your environment adheres to industry standards, best practices, and internal policies.

 Security Enhancement: By detecting unauthorized configuration changes and raising alerts, AWS Config improves the security of your AWS environment.

 Troubleshooting and Auditing: By providing a comprehensive history of resource configuration changes, it facilitates audits, investigations, and faster problem resolution.

 Automation Support: Because AWS Config integrates with AWS Lambda, you can automate responses to configuration changes based on predefined rules.

AWS Config Pricing

AWS Config pricing depends on two fundamental factors: the number of configuration items recorded and the number of active Config rules.

 Configuration Items: AWS Config charges based on the number of configuration items recorded. Configuration items represent the resources in your AWS account that AWS Config monitors and tracks configurations for, such as EC2 instances, S3 buckets, and IAM roles.

 Active Config Rules: AWS Config also charges based on the number of active Config rules you have deployed. Config rules define desired configurations or compliance requirements for your AWS resources, and pricing is determined by the number of rules that are actively evaluating resource configurations.

AWS Config vs CloudTrail

 Focus: AWS Config mainly focuses on configuration management and monitoring, while AWS CloudTrail focuses on logging and auditing of API activity.

 Functionality: AWS Config continuously monitors resources, tracks configuration changes, and supports automation through AWS Lambda; CloudTrail records API calls made in the AWS account and provides logs of API activity for troubleshooting.

 Use Cases: AWS Config is used for configuration management, monitoring, security, and tracking changes to resources; CloudTrail is used for auditing, security analysis, monitoring, troubleshooting, and tracking resource activity.

 Integration: AWS Config integrates with other AWS services for analysis and automation; CloudTrail logs can be used as a data source for other AWS services.

AWS Systems Manager


AWS Systems Manager (SSM) is like AWS’s Swiss Army knife for infrastructure
management — it gives you a unified way to manage, monitor, and automate tasks across your
AWS resources (and even on-premises servers) without constantly logging into individual
instances.

1. Overview

AWS Systems Manager is a service that provides centralized operational control for your
AWS resources.
It allows you to:

 View operational data from multiple AWS services.


 Automate tasks such as patching, software deployment, and configuration updates.
 Remotely access instances securely without SSH keys.
 Manage both AWS and hybrid environments.

Think of it as a control tower for your EC2 instances, RDS databases, S3 buckets, and even on-
premise servers.

2. Key Features
a) Session Manager

 Provides browser-based or CLI-based shell access to EC2 instances and on-prem servers.
 No need for SSH or inbound ports — secure connections via AWS IAM.
 Session logs can be stored in Amazon S3 or CloudWatch Logs.
b) Run Command

 Execute commands across multiple instances at once without logging in.


 Useful for installing software, restarting services, or gathering logs.
 Supports parameterized automation.
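A minimal sketch of Run Command using boto3 is shown below; the instance IDs are placeholders, and the SSM Agent is assumed to be running on the target instances with an appropriate IAM instance profile.

```python
import boto3

ssm = boto3.client("ssm")

# Run shell commands on several managed instances at once
# (instance IDs are placeholders).
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["uptime", "df -h"]},
)
command_id = response["Command"]["CommandId"]

# Later, fetch the output for one of the instances.
output = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(output["Status"])
print(output["StandardOutputContent"])
```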

c) Patch Manager

 Automates patching for OS and applications.


 Supports Windows, Amazon Linux, RHEL, Ubuntu, etc.
 Allows maintenance windows to avoid downtime during business hours.

d) Automation

 Predefined automation documents (SSM Documents) for tasks like:


o Creating AMIs
o Restarting services
o Managing backups
 Can also create custom automations using JSON/YAML.

e) Parameter Store

 Central place to store configuration data and secrets (like database connection strings).
 Supports plain text and encrypted values (integrates with AWS KMS).
 Useful for application configuration management.
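The following is a minimal sketch of storing and reading a secret with Parameter Store via boto3; the parameter name and connection string are placeholders, and encryption with the default AWS managed KMS key is assumed.

```python
import boto3

ssm = boto3.client("ssm")

# Store a database connection string as an encrypted parameter
# (name and value are placeholders).
ssm.put_parameter(
    Name="/myapp/prod/db-connection-string",
    Value="postgresql://user:password@db.example.com:5432/app",
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it at application startup.
param = ssm.get_parameter(
    Name="/myapp/prod/db-connection-string",
    WithDecryption=True,
)
print(param["Parameter"]["Value"])
```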

f) Inventory

 Collects metadata about your instances (OS version, installed software, applications,
etc.).
 Helps with compliance and audit reporting.

g) State Manager

 Ensures that your instances are always in a desired state.


 Example: Automatically install antivirus if missing or enforce a certain OS configuration.

h) OpsCenter

 Provides a single dashboard for operational issues and incidents.


 Integrates with AWS CloudWatch, AWS Config, and other monitoring tools.

3. How It Works

1. SSM Agent must be installed and running on managed instances.
o Amazon Linux and Windows AMIs come with it pre-installed.
2. IAM Roles and Policies are required for Systems Manager to access resources.
3. Management Actions (Run Command, Automation, Patch, etc.) are executed via SSM
Agent.
4. Results are logged to CloudWatch Logs or S3.

4. Benefits

 No SSH key management – Secure IAM-based access.


 Scalability – Manage hundreds or thousands of instances at once.
 Compliance – Patch and configuration automation reduces human error.
 Hybrid Support – Works for both AWS and on-prem resources.
 Audit Trails – Full logging of commands and sessions.

5. Common Use Cases

 Patch OS across hundreds of servers in one go.


 Store and retrieve sensitive credentials without hardcoding them.
 Enforce software compliance policies.
 Run scripts on all EC2 instances at once.
 Provide developers with secure, audited instance access.

6. Pricing

 Free tier for basic functionality (Run Command, Session Manager, Parameter Store
standard parameters).
 Costs apply for:
o Advanced parameters in Parameter Store.
o Automation executions beyond free tier.
o Managed instance inventory collection.
o Session recording and data transfer.

AWS Elastic Load Balancer


Elastic Load Balancer is a service provided by Amazon in which incoming traffic is efficiently and automatically distributed across a group of backend servers in a manner that improves speed and performance. It helps improve the scalability of your application and helps secure it. A load balancer allows you to configure health checks for the registered targets; if any registered target (for example, an instance in an Auto Scaling group) fails the health check, the load balancer will not route traffic to that unhealthy target, ensuring your application remains highly available and fault tolerant. To know more about load balancing, refer to Load Balancing in Cloud Computing.
Types of Load Balancers

1. Classic Load Balancer: It is the traditional form of load balancer which was used initially.
It distributes the traffic among the instances and is not intelligent enough to support host-
based routing or path-based routing. It ends up reducing efficiency and performance in
certain situations. It is operated on the connection level as well as the request level.
Classic Load Balancer is in between the transport layer (TCP/SSL) and the application
layer (HTTP/HTTPS).

2. Application Load Balancer: This type of Load Balancer is used when decisions are to be
made related to HTTP and HTTPS traffic routing. It supports path-based routing and host-
based routing. This load balancer works at the Application layer of the OSI Model. The
load balancer also supports dynamic host port mapping.

3. Network Load Balancer: This type of load balancer works at the transport layer(TCP/SSL)
of the OSI model. It’s capable of handling millions of requests per second. It is mainly
used for load-balancing TCP traffic.

 Gateway Load Balancer: Gateway Load Balancers let you deploy, scale, and manage virtual appliances such as firewalls. They combine a transparent network gateway with a load balancer that distributes traffic to the appliances.

Steps to configure an Application load balancer in AWS

Step 1: Launch the two instances on the AWS management console named Instance A and Instance
B. Go to services and select the load balancer. To create AWS free tier account refer to Amazon
Web Services (AWS) – Free Tier Account Set up.

Step 2: Click on Create the load balancer.

Step 3: Select Application Load Balancer and click on Create.

Step 4: Here you are required to configure the load balancer. Write the name of the load balancer.
Choose the scheme as internet facing.
Step 5: Add at least 2 availability zones. Select us-east-1a and us-east-1b

Step 6: We don't need to do anything here. Click on Next: Configure Security Groups

Step 7: Select the default security group. Click on Next: Configure Routing

Step 8: Choose the name of the target group to be my target group. Click on Next: Register Targets.

Step 9: Choose instance A and instance B and click on Add to register. Click on Next: Review.

Step 10: Review all the configurations and click on create

Step 11: Congratulations!! You have successfully created a load balancer. Click on close.

Step 12: This highlighted part is the DNS name which when copied in the URL will host the
application and will distribute the incoming traffic efficiently between the two instances.

Step 13: This is the listener port 80 which listens to all the incoming requests

Step 14: This is the target group that we have created

Step 15: When you are finished, delete the load balancer. Go to Actions -> Click on Delete.

Step 16: Also don't forget to terminate the instances.
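The same setup can also be scripted. The following is a minimal sketch using boto3 (the elbv2 client); the subnet, security group, VPC, and instance IDs are placeholders that would come from your own environment.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing Application Load Balancer across two subnets
# in different Availability Zones (IDs are placeholders).
alb = elbv2.create_load_balancer(
    Name="my-load-balancer",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],
    SecurityGroups=["sg-0ccc3333"],
    Scheme="internet-facing",
    Type="application",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group with HTTP health checks for the registered instances.
tg = elbv2.create_target_group(
    Name="my-target-group",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0ddd4444",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register Instance A and Instance B (IDs are placeholders).
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)

# Listener on port 80 that forwards all requests to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```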

Features of cloud

 No up-front investment
 Lowering operating cost
 Highly scalable and efficient
 Easy access
 Reducing business risks and maintenance expenses

Advantages of Elastic Load Balancer

 ELB automatically distributes incoming application traffic across multiple targets, such
as EC2 instances, containers, and IP addresses, to achieve high availability.

 It can automatically scale to handle changes in traffic demand, allowing you to maintain
consistent application performance.

 It can monitor the health of its registered targets and route traffic only to the healthy targets.

 It evenly distributes traffic across all availability zones in a region, improving fault
tolerance.
Disadvantages of Elastic Load Balancer

 ELB can add latency to your application, as traffic must pass through the load balancer
before being routed to your targets.

 It has limited customization options, so you may need to use additional tools and services
to fully meet your application's requirements.

 It can introduce additional complexity to your application architecture, requiring you to manage and maintain additional resources.

 It can increase your overall AWS costs, especially if you have high traffic volumes or
require multiple load balancers.

Amazon EC2 Auto Scaling


Auto Scaling is a cloud computing feature that enables an application to automatically adjust
its resources, such as servers and compute instances, based on real-time demand. The goal is
to ensure sufficient resources for performance and availability, while optimizing costs by
scaling up or down as needed.

Scaling Amazon EC2 means you start with the resources you require at the time of starting
your service and build your architecture to automatically scale in or out, in response to the
changing demand. As a result, you only pay for the resources you utilize. You don't have to be
concerned about running out of computational power to satisfy your consumer's demand.

Benefits of Auto Scaling


 Dynamic Scaling: The AWS Auto Scaling service doesn't require any manual intervention; it automatically scales the application up and down depending on the incoming traffic.

 Pay For What You Use: With Auto Scaling, resources are used efficiently: when demand is low, fewer resources are consumed, and when demand is high, capacity increases, so AWS charges you only for the resources you actually use.

 Automatic Performance Maintenance: AWS Auto Scaling maintains optimal application performance across changing workloads, ensuring the application runs at the desired level, keeping latency low, and increasing capacity when your application needs it.

Example: Here it involves a simple web application that helps employees locate conference
rooms for virtual meetings. In this scenario, the app sees light usage at the start and end of the
week. However, as more employees book meetings midweek, the demand for the application
rises during that period. The graph below shows the usage of the application’s capacity over a
week:

You can prepare for fluctuating capacity by provisioning enough servers to handle peak traffic,
guaranteeing the application always meets demand. However, this approach often leads to
excess capacity on slower days, which raises the overall operating costs. Alternatively, you
could allocate resources based on average demand, which reduces costs by avoiding
unnecessary equipment for occasional spikes. However, this might negatively impact user
experience when demand surpasses available capacity. EC2 Auto Scaling addresses this
problem by automatically adding instances as demand increases and removing them when no
longer needed. It uses EC2 instances, allowing you to pay only for what you actually use,
resulting in a more cost-efficient architecture that reduces unnecessary expenses.
Amazon EC2 Auto Scaling

 Amazon EC2 Auto Scaling helps you scale EC2 resources based on the demand of incoming traffic, maintaining high availability while optimizing the cost of AWS EC2.

 EC2 Auto Scaling lets you create a collection of EC2 instances called an Auto Scaling group, to which a load balancer distributes traffic. You can then specify the minimum, maximum, and desired capacity for the group, and EC2 Auto Scaling starts and stops instances automatically to keep the group at the appropriate capacity.

 EC2 Auto Scaling also lets you configure scaling policies in which you specify, for example, the CPU utilization or memory usage at which instances should be added or removed, so capacity scales automatically with demand.

Auto Scaling Components

 Groups: EC2 instances are grouped together so that they can be scaled and managed as a single logical unit. You can specify the minimum and maximum number of EC2 instances required, based on the demand of the incoming traffic.

 Configuration Templates: A launch template (or launch configuration) that the Auto Scaling group uses to launch EC2 instances. In it you specify the Amazon Machine Image ID, key pair, security groups, and so on.

 Scaling Options: AWS Auto Scaling provides a number of scaling options, some of which are listed below.
o Dynamic scaling
o Predictive scaling
o Scheduled scaling
o Manual scaling

This is where Amazon EC2 Auto Scaling comes into the picture. You can use Amazon EC2 Auto Scaling to add or remove Amazon EC2 instances in response to changes in your application's demand, maintaining a greater level of application availability by dynamically scaling your instances in and out as needed.

Features of AWS Auto Scaling

Here are some of the most important features of AWS Auto Scaling:

 Dynamic Scaling: Adapts to changing conditions and adjusts the number of EC2 instances according to demand. It helps you follow the demand curve for your application and scale instances ahead of time. For example, you can use target tracking scaling policies to select a load metric for your application, such as CPU utilization, or use the Application Load Balancer's "Request Count Per Target" metric from the Elastic Load Balancing service. Amazon EC2 Auto Scaling then adjusts the number of EC2 instances as needed to keep you on target.

 Load Balancing: Load balancing involves distributing incoming traffic across multiple
instances to improve performance and availability. Amazon Elastic Load Balancing
(ELB) is a service that automatically distributes incoming traffic across multiple instances
in one or more Availability Zones.

 Multi-Availability Zone Deployment: Multi-Availability Zone (AZ) deployment involves launching instances in multiple AZs to improve availability and fault tolerance. Amazon EC2 Auto Scaling can be used to automatically launch instances in additional AZs to maintain availability in case of an AZ outage.
 Containerization: Containerization involves using containers to package and deploy
applications, making them more portable and easier to manage. Amazon Elastic Container
Service (ECS) is a service that makes it easy to run, stop, and manage Docker containers
on a cluster of EC2 instances.

Computing power is a programmatic resource in the cloud, so you can take a more flexible approach to scaling your applications. When you add Amazon EC2 Auto Scaling to an application, you can create new instances as needed and terminate them when they're no longer in use. In this way, you only pay for the instances you use, while they're in use.

Types of AWS (Amazon Web Services) Autoscaling

 Horizontal Scaling: Horizontal scaling involves adding more instances to your application
to handle increased demand. This can be done manually by launching additional instances,
or automatically using Amazon EC2 Auto Scaling, which monitors your application's
workload and adds or removes instances based on predefined rules.

 Vertical Scaling: Vertical scaling involves increasing the resources of existing instances,
such as CPU, memory, or storage. This can be done manually by resizing instances, or
automatically using Amazon EC2 Auto Scaling with launch configurations that specify
instance sizes based on the workload.

 Reactive Scaling: Reactive Scaling responds to changes in demand as they occur by adding
or removing instances based on predefined thresholds. This type of scaling reacts to real-
time changes, such as sudden spikes in traffic, by scaling the application accordingly.
However, it is not predictive, meaning the system adjusts only when demand changes are
detected.

 Target Tracking Scaling: Target Tracking Scaling adjusts the number of instances in your
Auto Scaling group to maintain a specific metric at a target value. For example, you can
set a target for the average CPU utilization, and Auto Scaling will automatically add or
remove instances to keep the metric at the defined level.

 Predictive Scaling: Helps you to schedule the right number of EC2 instances based on the
predicted demand. You can use both dynamic and predictive scaling approaches together
for faster scaling of the application. Predictive Scaling forecasts future traffic and allocates
the appropriate number of EC2 instances ahead of time. Machine learning algorithms in
Predictive Scaling identify changes in daily and weekly patterns and automatically update
projections. In this way, the need to manually scale the instances on particular days is
relieved.

 Scheduled Scaling: As the name suggests allows you to scale your application based on the
scheduled time you set. For example, A coffee shop owner may employ more baristas on
weekends because of the increased demand and frees them on weekdays because of
reduced demand.
Limitations of AWS EC2 Autoscaling

There are several limitations to consider when using Amazon EC2 Auto Scaling:

 Number of instances: Amazon EC2 Auto Scaling can support a maximum of 500 instances
per Auto Scaling group.

 Instance health checks: Auto Scaling uses Amazon EC2 instance health checks to
determine the health of an instance. If an instance fails a health check, Auto Scaling will
terminate it and launch a new one. However, this process can take some time, which can
impact the availability of your application.

 Scaling policies: Auto Scaling allows you to set scaling policies based on CloudWatch
metrics, but these policies can be complex to configure and may not always scale your
application as expected.

 Application dependencies: If your application has dependencies on other resources or services, such as a database or cache, it may not scale as expected if those resources become overloaded or unavailable.

 Cost: Using Auto Scaling can increase the cost of running your application, as you may be
charged for the additional instances that are launched.
Overall, it's important to carefully consider the limitations of Amazon EC2 Auto Scaling and how they may impact your application when deciding whether to use this service. To know the difference between auto scaling and load balancing, refer to Auto Scaling vs Load Balancer.

AWS Autoscaling For EC2 (Elastic Cloud Computing)

Amazon EC2 Auto Scaling gives you the freedom to automatically scale instances according to demand. If problems are detected, it replaces unhealthy instances with fully functional ones. To automate fleet management for EC2 instances, Amazon EC2 Auto Scaling performs three major functions:

 Balancing the capacities across different Availability zones: If your application has three
availability zones, Amazon EC2 Autoscaling can help you balance the number of instances
across the three zones. As a result, each zone receives no more or fewer instances than the
others, resulting in a balanced distribution of traffic and burden.

 Replacing and Repairing unhealthy instances: If the instances fail to pass the health check,
Autoscaling replaces them with healthy instances. As a result, the problem of instances
crashing is reduced, and you won't have to manually verify their health or replace them if
they're determined to be unhealthy.

 Monitoring the health of instances: While the instances are running, Amazon EC2 Auto
Scaling ensures that they are healthy and that traffic is evenly allocated among them. It
does health checks on the instances on a regular basis to see if they're experiencing any
issues.

Use Cases of AWS (Amazon Web Services) AutoScaling

 Automatic Scaling: Applications can be scaled automatically based on incoming traffic; if the load increases, the application scales up, and when the load decreases, it scales down automatically.
 Scheduled Scaling: Based on historical data about when traffic peaks and when it drops, you can schedule scaling actions for particular points in time.

 Integration: Auto Scaling integrates with other AWS services, most notably machine learning-based predictive scaling, which helps forecast incoming traffic and scale capacity accordingly.

Configuring AWS Auto Scaling Steps

Auto Scaling is an Amazon Web Service that allows instances to scale when traffic or CPU load increases. The Auto Scaling service monitors all instances that belong to the Auto Scaling group and ensures that load is balanced across them. Depending on the load, the scaling group adds instances according to the configuration. When you create the Auto Scaling group, you configure the desired capacity, minimum capacity, maximum capacity, and CPU utilization thresholds. For example, if CPU utilization rises above 60% across the instances, one more instance is created, and if CPU utilization falls below 30%, one instance is terminated; these thresholds are entirely up to your requirements. If any instance fails for any reason, the scaling group maintains the desired capacity by starting another instance.

To know how to create autoscaling refer to Create and Configure the Auto Scaling Group in
EC2.
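A minimal sketch of this kind of configuration using boto3 is shown below; the launch template name, subnet IDs, and capacity values are placeholders, and a target tracking policy on average CPU utilization is used instead of separate scale-out and scale-in alarms.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group from an existing launch template
# (template name and subnet IDs are placeholders).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchTemplate={"LaunchTemplateName": "my-launch-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
)

# Target tracking policy: keep average CPU utilization around 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="keep-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```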

Amazon EC2 Auto Scaling Instance Lifecycle

Every EC2 instance within an auto scaling group follows a distinct lifecycle. This lifecycle
begins when the instance is launched and concludes with its termination. Below is an
illustration of the various stages an instance goes through during its lifecycle

Pricing for Amazon EC2 Auto Scaling


Amazon EC2 Auto Scaling itself is free: there is no additional fee for using it. You are charged only for the Amazon EC2 instances that you use, along with related resources such as CloudWatch alarms and Elastic Load Balancers.

Pricing components and costs:

 Auto Scaling Service: No additional cost for using Auto Scaling. You only pay for the underlying resources (EC2 instances, etc.).

 Amazon EC2 Instances: Billed based on the type of instance (e.g., On-Demand, Reserved, Spot). Pricing depends on instance type and region.

 Amazon EC2 On-Demand Instances: Starting at $0.0042 per hour (for t4g.micro; varies by instance type and region).

 Amazon EC2 Reserved Instances: Up to 72% savings compared to On-Demand, with pricing based on 1- or 3-year terms.

 Amazon EC2 Spot Instances: Up to 90% savings compared to On-Demand; prices fluctuate based on demand.

 Amazon EC2 Elastic Load Balancing: Charged per hour of load balancer usage and per GB of data processed (starts at $0.025 per hour and $0.008 per GB in the US East region).

 Amazon CloudWatch (Monitoring): Basic monitoring is free; detailed monitoring starts at $0.01 per metric per month.

 Data Transfer: Data transfer in is free; data transfer out to the internet starts at $0.09 per GB.

 Elastic IP Addresses: The first Elastic IP is free when associated with a running instance; $0.005 per additional IP per hour.

Scaling Plan

A scaling plan is a blueprint for automatically scaling your cloud resources up or down in response to incoming traffic. It gives a complete picture of the resources you want to scale, the metrics you want to monitor, and the actions to take when those metrics rise above or fall below certain levels. Many AWS resources, such as Amazon EC2 Auto Scaling groups, Amazon ECS services, Amazon DynamoDB tables and indexes, and Amazon Aurora replicas, can be scaled by using scaling plans.

AWS CloudFormation
AWS CloudFormation is an Infrastructure as Code (IaC) offering that allows you to describe and
provision AWS infrastructure in a repeatable and automated way. You write CloudFormation
templates (in JSON or YAML) to specify the resources you require, like EC2 instances, S3
buckets, or RDS databases, and CloudFormation does the work of creating, managing, and
updating them for you.

With CloudFormation, you can treat your infrastructure as one unit, known as a stack, to be able
to easily replicate and have consistency between various AWS environments and regions. This
eliminates the requirement for manual configuration, which is prone to errors and takes time.

How Does AWS CloudFormation Work?

AWS CloudFormation is a service offered by the AWS cloud that is mainly used to provision AWS services such as EC2, S3, Auto Scaling, and Elastic Load Balancing. Instead of managing all of these services manually, you provision them automatically with infrastructure as code (IaC): you describe the resources you need in a template, and CloudFormation creates, updates, and deletes them for you as a stack.

Features Of AWS CloudFormation

1. No Upfront Investment

AWS CloudFormation operates on a pay-as-you-go model, meaning there is no need for large
upfront costs.

2. Lower Operating Costs


By automating infrastructure provisioning, CloudFormation helps reduce the time and resources
needed for manual management, lowering operational expenses.

3. Highly Scalable

Easily scale your infrastructure up or down according to your needs without the hassle of manual
intervention.

4. Easy Access

CloudFormation is integrated with the AWS Management Console, providing users with an
intuitive interface to manage resources.

5. Reduces Business Risks and Maintenance Expenses

Automation through CloudFormation ensures consistency across environments, reducing human error and the cost of maintenance.

Use Cases Of AWS CloudFormation

1. Infrastructure Provisioning

You can automate provisioning of complex infrastructures in various environments using CloudFormation. Defining your infrastructure in code helps you to duplicate your infrastructure identically in other regions using one template.

2. Auto-Scaling Environments

You can create auto-scaling groups using CloudFormation so that your resources scale
automatically depending on load, with optimal performance and cost.

3. Multi-Region Deployments

With CloudFormation, you can provision resources in multiple regions so that your infrastructure
is disaster-resistant or resistant to a failure in a particular region.

4. CI/CD Pipeline Integration

CloudFormation supports integration with AWS CodePipeline, Jenkins, and other CI/CD tools, so you can automate the deployment of infrastructure as well as application code.

Benefits of AWS CloudFormation

1. Automation
AWS CloudFormation helps to automate the process of creating, configuring, and managing AWS
resources. This allows for the infrastructure to be deployed quickly, reliably, and repeatedly.

2. Consistency and standardization

With AWS CloudFormation, it is possible to create standard templates of infrastructure stacks that
can be used to create identical copies of the same infrastructure. This ensures consistency in the
infrastructure deployment and makes it easier to maintain.

3. Cost savings

AWS CloudFormation helps to reduce costs by allowing customers to use existing infrastructure
templates and reuse them across multiple environments. This reduces the cost of designing and
deploying new infrastructure.

4. Security

AWS CloudFormation helps to ensure that all AWS resources are configured securely by using
security policies and rules. This helps to protect the infrastructure from potential security threats.

5. Scalability

AWS CloudFormation allows for the quick and easy scaling of resources on demand. This means
that customers can quickly and easily add resources to meet their changing needs.


Why Do We Need AWS CloudFormation?

Just imagine that you have to develop an application that uses various AWS resources. Creating and managing those resources by hand can be highly time-consuming and challenging, and it becomes difficult to focus on developing the application when you are spending all of your time managing AWS resources. What if there were a service for that? This is where AWS CloudFormation comes into the picture.

Getting Started with AWS CloudFormation

A template is written in JSON or YAML; this article discusses the JSON format. JSON is a text-based format that represents structured data based on JavaScript object syntax. It carries the AWS resource details in a structured form from which the AWS infrastructure is created.

Structure of CloudFormation JSON Template

 Format version: It defines the version of a template.


 Description: Any extra description or comments about your template are written in the
description of the template.

 Metadata: It can be used to provide further information using JSON objects.

 Parameters: Parameters are used when you want to provide custom or dynamic values to
the stack during runtime. Therefore, we can customize templates using parameters.

 Mappings: Mappings in the template match keys to a set of corresponding named values, which you can look up at deployment time (for example, to pick a value based on the region).

 Conditions: Conditions control whether certain resources are created, or whether certain resource properties are assigned values, when the stack is created.

 Transform: Transform helps in reusing template components by building a simple declarative language for AWS CloudFormation.

 Resources: Specifies the AWS resources (such as an EC2 instance, an S3 bucket, or an AWS Lambda function) and their properties that you want in your stack. This is the only required section of a template.

 Outputs: Defines the values that are returned when you view your CloudFormation stack's properties, such as resource IDs or endpoint URLs.
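To make the structure concrete, the following is a minimal sketch that builds a small JSON template as a Python dictionary and deploys it with boto3; the stack name, parameter, and bucket naming are placeholders, and the resulting S3 bucket name must still be globally unique.

```python
import json
import boto3

# A minimal template illustrating the sections described above:
# it creates a single S3 bucket (logical name "MyBucket" is a placeholder).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example stack with one S3 bucket",
    "Parameters": {
        "BucketNamePrefix": {"Type": "String", "Default": "demo"}
    },
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketName": {"Fn::Sub": "${BucketNamePrefix}-example-bucket"}
            },
        }
    },
    "Outputs": {
        "BucketName": {"Value": {"Ref": "MyBucket"}}
    },
}

cloudformation = boto3.client("cloudformation")

# Create the stack; CloudFormation provisions every resource in the template.
cloudformation.create_stack(
    StackName="my-example-stack",
    TemplateBody=json.dumps(template),
)
```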

CloudFormation Template Terms and Concepts

Understanding the core concepts that CloudFormation templates use to organize resources, settings, and functions is key to managing AWS infrastructure efficiently.

1. Template

A CloudFormation template is simply a JSON or YAML file that defines the AWS resources to be created and configured.

2. Stacks

A stack is the collection of resources defined by a CloudFormation template. When you deploy a template, CloudFormation creates a stack, and all of the resources in that stack are provisioned together as a single unit.

3. Formatting

JSON and YAML are both used for CloudFormation templates. YAML is often preferred because it tends to be more compact and readable, for small and large templates alike.

4. Change Sets
Change sets let you preview what CloudFormation will modify in the deployed resources before an update is applied to a specific stack. This helps ensure that changes to the infrastructure do not introduce unintended risks (a minimal sketch appears after this list of concepts).

5. Functions

CloudFormation comes with several built-in functions (like Fn::Sub, Fn::Join), and these functions
are aimed at making dynamic configuration easier so that as the resources are being deployed, their
properties can be adjusted and modified.

6. Parameters

Parameters allow user input at stack deployment time, which makes it easier to create templates that are flexible and reusable. For instance, instance types, VPC IDs, or environment names can be supplied as parameters.

7. Conditions

Conditions enable or disable the creation of a resource depending on whether certain conditions
are true or false (for example, it may depend on the environments of production or development).
This allows for more complex template logic, deploying different resources based on provided
parameters.
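As referenced in the change sets concept above, the following is a minimal sketch using boto3 that previews an update to an existing stack before executing it; the stack name, change set name, and template file path are placeholders.

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Read the updated template from a local file (path is a placeholder).
with open("updated-template.json") as f:
    template_body = f.read()

# Create a change set that describes what an update would modify.
cloudformation.create_change_set(
    StackName="my-example-stack",
    ChangeSetName="preview-update",
    TemplateBody=template_body,
    ChangeSetType="UPDATE",
)

# Wait until the change set has been computed, then inspect it.
waiter = cloudformation.get_waiter("change_set_create_complete")
waiter.wait(ChangeSetName="preview-update", StackName="my-example-stack")

details = cloudformation.describe_change_set(
    ChangeSetName="preview-update",
    StackName="my-example-stack",
)
for change in details["Changes"]:
    resource_change = change["ResourceChange"]
    print(resource_change["Action"], resource_change["LogicalResourceId"])

# Apply the change set only if the proposed changes look safe:
# cloudformation.execute_change_set(
#     ChangeSetName="preview-update", StackName="my-example-stack")
```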

How to Deploy a CloudFormation Template

Deploying a CloudFormation template can be done through multiple methods, each catering to
different preferences and workflows

1. AWS Management Console

The AWS Management Console offers a user-friendly way to deploy templates. Simply log in,
navigate to CloudFormation, and select "Create Stack." You can then upload your template (in
JSON or YAML format), configure parameters, tags, and permissions, and finalize by clicking
"Create Stack." This method is ideal for those who prefer a visual, straightforward interface.

2. CloudFormation Designer

For a more graphical approach, CloudFormation Designer allows users to visually build or modify
templates using a drag-and-drop interface within the AWS Console. After creating or adjusting
your template, deployment is just a click away by selecting "Create Stack." This method suits users
who enjoy visual tools for infrastructure design.

3. AWS CLI (Command Line Interface)


After ensuring the AWS CLI is installed and configured, you can deploy your template by running
a simple command. This method is particularly useful for developers who want to integrate
deployments into CI/CD pipelines or automate infrastructure tasks.

What Are AWS CloudFormation Hooks?

AWS CloudFormation Hooks is a powerful feature that helps ensure your CloudFormation
resources comply with your organization’s security, operational, and cost optimization standards.
CloudFormation Hooks allows you to implement custom code that proactively checks the
configuration of AWS resources before they are provisioned. If any resources are found to be non-
compliant, CloudFormation can either block the provisioning process or issue a warning while
allowing the process to continue, providing enhanced control over your infrastructure setup.

Benefits of Using CloudFormation Hooks

1. Automatic Compliance Checking

CloudFormation Hooks automatically verify that your resources meet your organization’s rules
and standards before deployment. By catching non-compliant resources early, it helps prevent
issues and ensures that only resources compliant with your policies are provisioned in your cloud
environment.

2. Personalized Checks

You have the flexibility to create custom checks tailored to your specific organizational needs.
These checks ensure resources adhere to your defined standards before they are deployed, giving
you full control over your cloud infrastructure.

3. Manage Resource Lifecycles

CloudFormation Hooks allows you to track and manage resources throughout their lifecycle,
ensuring they remain compliant with your rules and standards from provisioning to
decommissioning.

4. Cost Optimization

By enforcing guidelines that control resource usage, CloudFormation Hooks helps prevent
unnecessary spending and ensures cost-efficient infrastructure management. You can set rules to
limit the over-provisioning of resources, effectively controlling costs and optimizing spending.

5. Enhanced Security

CloudFormation Hooks adds an extra layer of security by enforcing strict security policies during
resource provisioning. This ensures that unauthorized or risky configurations are prevented,
thereby enhancing the overall protection of your cloud environment.
How to Create an AWS CloudFormation Template

There are two main ways to create an AWS CloudFormation template:

1. Use Pre-Built Templates

You have two options when using a pre-built template:

 Choose an Existing Template: You can select a previously created template and customize
it to fit your needs. This option allows you to modify the template to suit your current
requirements.

 Use a Sample Template: AWS provides several sample templates to help you get started.
You can choose one of these sample templates and modify it to deploy your infrastructure.
This article uses this approach. Once you select a sample template, you can customize it to
match your infrastructure setup.

2. Build Your Own Template from Scratch

If you prefer a more hands-on approach, you can use AWS Application Composer to visually
design your template. This tool offers a drag-and-drop interface, making it easier to configure
infrastructure components without needing to write code. It's a great choice if you want to build a
template visually and generate it automatically.
