Unit 4
Multi-AZ deployments
I recommend that they migrate their on-premises database to Amazon Relational Database Service
(Amazon RDS), and use a Multi-AZ deployment for high availability. In a Multi-AZ deployment,
Amazon RDS automatically creates a primary database (DB) instance and synchronously replicates
the data to an instance in a different Availability Zone. When it detects a failure, Amazon RDS
automatically fails over to a standby instance without manual intervention. This failover mechanism
meets the customer’s need to have a highly available database.
The following diagram shows a Multi-AZ deployment with one standby DB instance, and how it
works.
For even higher availability, the customer could explore deploying two standby DB instances and
using three Availability Zones instead of two.
Say that you deploy MySQL or PostgreSQL databases in three Availability Zones by using Amazon
RDS Multi-AZ with two readable standbys. With this configuration, automatic failovers typically
complete in under 35 seconds, and transaction-commit latency can be up to two times faster than in
an Amazon RDS Multi-AZ deployment with one standby. You also gain additional read capacity
and a choice of AWS Graviton2-based or Intel-based instances for compute.
The following diagram shows a Multi-AZ deployment with two standby DB instances, and how it
works.
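The same Multi-AZ setup can also be provisioned programmatically. The following is a minimal boto3 sketch, assuming AWS credentials are already configured; the instance identifier, instance class, storage size, and credentials are illustrative placeholders rather than values from this course.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a MySQL DB instance with a synchronous standby in a second AZ.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",          # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
    MultiAZ=True,                              # provision and manage the standby automatically
)

Setting MultiAZ=True is what tells Amazon RDS to create the standby instance and fail over to it automatically when a failure is detected.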
Read replicas
Customers can also make RDS more highly available by using read replicas.
Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS DB
instances. For read-heavy database workloads, read replicas make it easier to elastically scale out
beyond the capacity constraints of a single DB instance.
You can create one or more replicas of a given source DB instance and serve high-volume
application read traffic from multiple copies of your data, which increases aggregate read
throughput. Read replicas can also be promoted to become standalone DB instances, when needed.
Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, Microsoft
SQL Server, and Amazon Aurora.
For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS
creates a second DB instance by using a snapshot of the source DB instance. Amazon RDS then
uses the engine’s native asynchronous replication to update the read replica when there’s a change
to the source DB instance.
The read replica operates as a DB instance that allows only read-only connections. Applications
can connect to a read replica like they would connect to any DB instance. Amazon RDS replicates
all databases in the source DB instance.
Here’s an example of when to use a read replica. Say that you’re running reports on your database,
which is causing performance issues with CPU-intensive reads. You can use a read replica and
direct all the reporting queries to that replica instead of to the primary instance. Offloading some of
the intense queries to the replica should result in enhanced performance on the primary instance.
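Creating the reporting replica described above amounts to a single API call. Here is a minimal boto3 sketch, assuming the source DB instance already exists; both identifiers are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of an existing source DB instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-reporting",   # placeholder replica name
    SourceDBInstanceIdentifier="orders-db",       # placeholder source instance
)

The reporting application would then point its read-only connection string at the replica's endpoint instead of the primary instance's endpoint.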
When you create an RDS DB instance, you choose a database instance type and size. Amazon RDS
provides a selection of instance types that are optimized to fit different use cases for relational
databases. Instance types comprise varying combinations of CPU, memory, storage, and
networking capacity. You have the flexibility to choose the appropriate mix of resources for your
database. Each instance type includes several instance sizes, which means that you can scale your
database to your target workload’s requirements.
Not every instance type is supported for every database engine, version, edition or Region.
When you want to scale your DB instance, you can vertically scale the instance and choose a larger
instance size. This might be the route you take when you need more CPU and storage capacity for
an instance.
If you need more CPU capabilities but don’t need more storage, you might choose to create read
replicas to offload some of the workload to a secondary instance.
Finally, here’s one last thing to consider. If you’re looking for better performance, consider using a
different storage type. For example, using Provisioned IOPS instead of General Purpose could give
you some of the performance enhancements that you want.
The following list briefly describes the three storage types:
General Purpose SSD: General Purpose SSD volumes offer cost-effective storage that works
well for a broad range of workloads. These volumes deliver single-digit millisecond latencies
and the ability to burst to 3,000 IOPS for extended periods of time. Baseline performance for
these volumes is determined by the volume’s size.
Provisioned IOPS: Provisioned IOPS storage is designed to meet the needs of I/O-intensive
workloads — particularly database workloads — that require low I/O latency and consistent
I/O throughput.
Magnetic: Amazon RDS also supports magnetic storage for backward compatibility. We
recommend that you use General Purpose SSD or Provisioned IOPS for any new storage needs.
The maximum amount of storage that’s allowed for DB instances on magnetic storage is less
than that of the other storage types.
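Switching an existing instance from General Purpose SSD to Provisioned IOPS, as suggested above, is done by modifying the DB instance. A hedged boto3 sketch, with a placeholder identifier and an illustrative IOPS value:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Move the instance to Provisioned IOPS (io1) storage with 3,000 IOPS.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",   # placeholder
    StorageType="io1",
    Iops=3000,
    ApplyImmediately=True,              # otherwise the change waits for the maintenance window
)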
Amazon Aurora
Amazon Aurora is a relational database service offered in the AWS cloud. It is one of the most
widely used services for low-latency, transaction-based data storage and processing. Amazon
Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud that
combines the performance and availability of traditional enterprise databases with the simplicity
and cost-effectiveness of open-source databases, at roughly one-tenth the cost of commercial
databases. It uses a clustered approach, replicating data across AWS Availability Zones for
efficient data availability.
Amazon Aurora is packed with high-performance subsystems. Its MySQL- and PostgreSQL-compatible
engines take advantage of fast, distributed storage. Aurora provides up to 5 times the throughput
of MySQL and 3 times the throughput of PostgreSQL on the same hardware. It supports high
storage capacity, which can scale up to 64 terabytes per database for enterprise
implementations. Amazon Aurora is completely managed by Amazon Relational Database Service
(RDS), which automates tedious administration tasks like hardware provisioning, database
setup, patching, and backups.
Amazon Aurora is built on top of an innovative distributed architecture that separates the storage
and compute layers of the database engine. The storage layer is distributed across multiple replicas,
while the compute layer runs on instances that are separate from the storage layer. This architecture
allows for automatic scaling of storage and compute resources independently and also provides
better fault tolerance and availability.
1. Availability and Durability: Aurora provides fault-tolerant, self-healing storage built for the
cloud and offers availability of 99.99%. The storage layer replicates 6 copies of your data
across 3 Availability Zones, and Aurora backs up data continuously to protect against
storage failures.
2. Performance and Scalability: Aurora provides up to 5 times the throughput of ordinary
MySQL. This performance is comparable with enterprise databases, at 1/10th the cost. You
can scale your database deployment up and down across smaller and larger instance types
as your needs change. To scale read capacity and performance, you can add up to fifteen
low-latency read replicas across 3 Availability Zones. Amazon Aurora automatically grows
storage as required, up to 64 TB per database instance.
3. Fully Managed: Amazon Aurora is managed by Amazon Relational Database Service
(RDS). You no longer need to worry about database management tasks, for example,
hardware provisioning, software patching, setup, configuration, or backups. Aurora
automatically and continuously monitors and backs up the database to Amazon S3,
enabling granular point-in-time recovery.
4. Compatibility with MySQL and PostgreSQL: The Amazon Aurora database engine is
fully compatible with existing MySQL and PostgreSQL open-source databases, and
adds compatibility for new releases regularly. This means that you can migrate
MySQL or PostgreSQL databases to Aurora by using standard MySQL or PostgreSQL
import/export tools or snapshots. It also means that the code, applications, drivers,
and tools you use with your existing databases can be used with Amazon Aurora with little
or no modification.
5. Cost: Amazon Aurora is designed to be cost-effective. You only pay for what you use, and
you can scale your database up or down as needed. Additionally, Aurora provides cost-
saving features such as automated storage optimization and the ability to pause or stop your
database when it's not in use.
An Aurora database cluster comprises a primary database instance, Aurora Replicas, and a cluster
volume that manages the data for those database instances. The Aurora cluster volume is not a
physical volume but a virtual database storage volume that spans multiple Availability Zones to
better support worldwide applications. Each zone holds its own copy of the database cluster data.
The primary database instance is where all read and write operations are performed over the
cluster volume. Each Aurora cluster has exactly one primary database instance.
An Aurora Replica is a copy of the primary database instance whose sole responsibility is to
serve read operations. There can be up to 15 replicas per primary database instance to
maintain high accessibility and availability across the Zones. In a failover condition,
Aurora switches to a replica when the primary database is not available.
Replicas also help reduce the read workload on the primary database.
Aurora can likewise run a multi-master cluster. In multi-master replication, all the
database instances have read and write capabilities. In AWS terminology they are
known as reader and writer database instances.
You can configure backups of the database to Amazon S3. This ensures the
safety of your data even in the worst case, where the whole cluster is down.
For an unpredictable workload, you can use Aurora Serverless to automatically start,
scale, and shut down the database to match application demand.
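As an illustration of the cluster model just described, here is a minimal boto3 sketch that creates an Aurora MySQL cluster with one writer (primary) instance and one Aurora Replica. It assumes credentials and default networking are in place; the identifiers, instance class, and password are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The cluster owns the shared cluster volume that spans Availability Zones.
rds.create_db_cluster(
    DBClusterIdentifier="orders-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
)

# The writer (primary) instance handles reads and writes.
rds.create_db_instance(
    DBInstanceIdentifier="orders-aurora-writer",
    DBClusterIdentifier="orders-aurora",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)

# An Aurora Replica in the same cluster serves read-only traffic.
rds.create_db_instance(
    DBInstanceIdentifier="orders-aurora-reader-1",
    DBClusterIdentifier="orders-aurora",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)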
In the Amazon RDS Shared Responsibility Model, AWS and the customer share duties to ensure
the security, availability, and performance of database services. AWS manages the infrastructure,
while customers take control of their data and database management.
AWS Responsibilities
Infrastructure and Hosting: AWS takes care of the foundational infrastructure for RDS,
including data centers, hardware, and networking. AWS ensures the physical
security of servers as well as network connectivity and operational aspects like power and
cooling.
Database Software and Patching: AWS manages the installation, maintenance, and
patching of the database engine. This includes automatic updates and bug fixes for
supported versions of database engines, ensuring customers always operate on secure and
stable software.
Backup and High Availability: AWS automates regular backups and offers features like
Multi-AZ deployments, which provide failover support across availability zones to
maintain uptime and minimize disruptions. AWS also manages storage scaling, data
replication, and recovery processes.
Compliance and Security in the Cloud: AWS handles security at the infrastructure level by
managing firewalls and supporting encryption both at rest and in transit. AWS ensures
compliance with global standards, but customers must also ensure that their use of AWS
services aligns with applicable regulations.
Customer Responsibilities
Database Configuration and Tuning: Customers are tasked with configuring the database
and setting parameters to optimize query performance. Tuning queries is essential for
improving performance and users can leverage tools like Amazon RDS Performance
Insights to identify potential bottlenecks.
Data Security and Access Control: Customers must encrypt any sensitive data stored in the
database and manage user access using AWS IAM roles and policies. Effective permission
management is essential to prevent unauthorized access to databases.
Monitoring and Query Performance: Customers are responsible for monitoring database
activity and query performance. Using tools like RDS Performance Insights, along with
CloudWatch and CloudTrail, they must consistently track queries and workloads to
maintain efficient database operation and promptly resolve performance issues.
Managed via Amazon RDS: Aurora utilizes the Amazon RDS platform for administrative
tasks such as provisioning, patching, backups, and recovery. This management is performed
through the AWS Management Console, AWS CLI, and API, and allows developers and
system administrators to focus on building and running their applications rather than
dealing with underlying infrastructure management.
Operations Based on Clusters: Unlike standard RDS instances, Aurora operates on entire
clusters of database servers that are automatically replicated. This architecture ensures high
availability, easy scaling, and efficient resource management.
High Availability: Aurora replicates data across multiple Availability Zones for fault
tolerance and automatic failover is handled by RDS in case of any failure.
Automated Scalability: Aurora takes advantage of RDS automatic scaling capabilities,
which enable it to adjust storage and compute resources dynamically based on real-time
workload demands.
Seamless Data Migration: Migrating from Amazon RDS for MySQL or PostgreSQL to
Aurora is simple. You can use Amazon RDS snapshots
or set up one-way replication to transfer your data smoothly. This allows you to benefit
from Aurora’s improved performance, scalability, and reliability without interrupting
existing workflows.
DB Engine Selection: When setting up a new database in Amazon RDS, users can opt for
Aurora MySQL or Aurora PostgreSQL as the engine of choice. This offers the same
familiarity as using traditional MySQL or PostgreSQL engines but with Aurora’s
performance boosts and reliability features.
Amazon Aurora uses a pay-as-you-go pricing model which means you only pay for the resources
you actually use. Here is a quick overview of the main factors that influence Aurora's pricing.
Instance Pricing: Aurora charges based on the instance type and size you choose, with
different prices for MySQL and PostgreSQL compatible instances. Larger instances cost
more, but they also provide higher performance.
Storage Costs: Aurora scales storage automatically according to your usage and you are
charged per gigabyte of storage used. The benefit here is you only pay for the storage you
need.
Backup Storage: Aurora includes automated backups at no additional charge for up to the
same amount of storage as your database. Additional backup storage is charged per GB.
I/O Requests: You are billed for the input/output operations (I/O requests) performed by
your database. Aurora offers cost efficiency by using optimized I/O operations for high
performance.
Data Transfer: Data transfer between Amazon Aurora and other AWS services is generally
free within the same Region, while charges may apply for cross-Region data transfer.
Security: Because Aurora is an AWS service, users can rely on AWS security practices and
use IAM features to control access.
Availability: Multiple replicas of the DB instance across numerous Availability Zones
guarantee high accessibility.
Scalability: With Aurora Serverless, you can set up the database to automatically scale
up and scale down with application demand.
Upkeep: Aurora requires no server maintenance and is up to 5 times faster than MySQL and
3 times faster than PostgreSQL.
Management Console: The AWS Management Console is easy to use, with features that let
you set up an Aurora cluster immediately.
At present, Aurora supports specific MySQL-compatible releases (such as MySQL 5.6.10), so
if you need features from a newer release or want an older version of MySQL, you can't
access them.
You can't use MyISAM tables, because Aurora currently supports only the InnoDB storage engine.
What is DynamoDB?
Amazon DynamoDB is a cloud-native NoSQL primarily key-value database. Let’s define each
of those terms.
DynamoDB is cloud-native in that it does not run on-premises or even in a hybrid cloud; it only
runs on Amazon Web Services (AWS). This enables it to scale as needed without requiring a
customer’s capital investment in hardware. It also has attributes common to other cloud-native
applications, such as elastic infrastructure deployment (meaning that AWS will provision more
servers in the background as you request additional capacity).
DynamoDB is NoSQL in that it does not support ANSI Structured Query Language (SQL).
Instead, it uses a proprietary API based on JavaScript Object Notation (JSON). This API is
generally not called directly by user developers, but invoked through AWS Software Developer
Kits (SDKs) for DynamoDB written in various programming languages (C++, Go, Java,
JavaScript, Microsoft .NET, Node.js, PHP, Python and Ruby).
DynamoDB is primarily a key-value store in the sense that its data model consists of key-value
pairs in a schemaless, very large, non-relational table of rows (records). It does not support
relational database management systems (RDBMS) methods to join tables through foreign keys.
It can also support a document store data model using JavaScript Object Notation (JSON).
DynamoDB’s NoSQL design is oriented towards simplicity and scalability, which appeal to
developers and devops teams respectively. It can be used for a wide variety of semistructured data-
driven applications prevalent in modern and emerging use cases beyond traditional databases, from
the Internet of Things (IoT) to social apps or massive multiplayer games. With its broad
programming language support, it is easy for developers to get started and to create very
sophisticated applications using DynamoDB.
Outside of Amazon employees, the world doesn't know much about the exact nature of this
database. There is a development version known as DynamoDB Local, written in Java and
intended to run on developer laptops, but the cloud-native database architecture is proprietary
and closed-source.
While we cannot describe exactly what DynamoDB is, we can describe how you interact with it.
When you set up DynamoDB on AWS, you do not provision specific servers or allocate set
amounts of disk. Instead, you provision throughput — you define the database based
on provisioned capacity — how many transactions and how many kilobytes of traffic you wish to
support per second. Users specify a service level of read capacity units (RCUs) and write capacity
units (WCUs).
As stated above, users generally do not directly make DynamoDB API calls. Instead, they will
integrate an AWS SDK into their application, which will handle the back-end communications
with the server.
DynamoDB data modeling needs to be denormalized. For developers used to working with both
SQL and NoSQL databases, the process of rethinking their data model is nontrivial, but also not
insurmountable.
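To make the key-value model and provisioned capacity concrete, here is a minimal boto3 sketch that creates a table with explicit read and write capacity units and then writes and reads one item. The table name, key, and values are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Create a table keyed on RoomId with modest provisioned throughput.
dynamodb.create_table(
    TableName="ConferenceRooms",
    AttributeDefinitions=[{"AttributeName": "RoomId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "RoomId", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
dynamodb.get_waiter("table_exists").wait(TableName="ConferenceRooms")

# Key-value access: put one item, then get it back by its key.
dynamodb.put_item(
    TableName="ConferenceRooms",
    Item={"RoomId": {"S": "room-101"}, "Capacity": {"N": "8"}},
)
item = dynamodb.get_item(
    TableName="ConferenceRooms",
    Key={"RoomId": {"S": "room-101"}},
)["Item"]
print(item)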
When you use DynamoDB transactions, operations provide the familiar ACID properties:
Atomicity
Consistency
Isolation
Durability
However, without the use of transactions, DynamoDB is usually considered to display BASE
properties:
Basically Available
Soft-state
Eventually consistent
Amazon Web Services (AWS) guarantees that DynamoDB tables span Availability Zones. You
can also distribute your data across multiple Regions by using global tables to provide greater
resiliency in case of a disaster. However, with global tables your data is eventually
consistent.
When to Use DynamoDB
Amazon DynamoDB is most useful when you need to rapidly prototype and deploy a key-value
store database that can seamlessly scale to multiple gigabytes or terabytes of information — what
are often referred to as “Big Data” applications. Because of its emphasis on scalability and high
availability DynamoDB is also appropriate for “always on” use cases with high volume
transactional requests (reads and writes).
DynamoDB is inappropriate for extremely large data sets (petabytes) with high frequency
transactions where the cost of operating DynamoDB may make it prohibitive. It is also important
to remember DynamoDB is a NoSQL database that uses its own proprietary JSON-based query
API, so it should be used when data models do not require normalized data with JOINs across
tables which are more appropriate for SQL RDBMS systems.
DynamoDB can be developed using Software Development Kits (SDKs) available from Amazon
in a number of programming languages.
C++
Clojure
Coldfusion
Erlang
F#
Go
Groovy/Rails
Java
JavaScript
.NET
Node.js
PHP
Python
Ruby
Scala
There are also a number of integrations for DynamoDB to connect with other AWS services and
open source big data technologies, such as Apache Kafka, and Apache Hive or Apache
Spark via Amazon EMR.
DynamoDB Streams
DynamoDB Streams captures a time-ordered record of item-level changes in a DynamoDB table.
You can think of it as a change log for your table: every time something changes, an event is
recorded.
Key Features
Time-ordered: Events are stored in the same order in which changes occur.
Up to 24-hour retention: After 24 hours, the data is automatically removed.
Granular details: Can store before and after images of items.
Real-time event-driven: Easily integrate with AWS Lambda to react instantly to
changes.
Shard-based: Data is partitioned into shards for high scalability.
How it Works
1. Enable Streams on a table (you choose what kind of data images to capture):
o Keys only – Only primary key attributes.
o New image – Full item after the change.
o Old image – Full item before the change.
o New and old images – Both before and after.
2. When an item changes:
o DynamoDB writes the change event to the stream.
o The event is assigned a sequence number and placed in the right shard.
3. Consumers read from the stream:
o AWS Lambda – Triggers automatically on each event.
o Kinesis Adapter – Reads with Kinesis Client Library (KCL).
o Custom consumers – You can poll the stream via the DynamoDB Streams API.
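A hedged sketch of the Lambda consumer path: streams are first enabled on the table, then a Lambda handler iterates over the records in each batch. The table name is a placeholder, and the event-source mapping between the stream and the function is assumed to be configured separately.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Enable the stream with both before and after images of changed items.
dynamodb.update_table(
    TableName="ConferenceRooms",   # placeholder table
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# Lambda handler that reacts to each change event in the batch.
def handler(event, context):
    for record in event["Records"]:
        event_name = record["eventName"]                 # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"]["Keys"]
        new_image = record["dynamodb"].get("NewImage")   # present for INSERT/MODIFY
        old_image = record["dynamodb"].get("OldImage")   # present for MODIFY/REMOVE
        print(event_name, keys, new_image, old_image)
    return {"batchItemFailures": []}                     # report no failed records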
3. AWS SDKs
Purpose: Allow developers to interact with AWS services using popular programming
languages.
Languages Supported: Java, Python (boto3), JavaScript, .NET, PHP, Ruby, Go, C++,
etc.
Use Case: Integrating AWS capabilities directly into applications.
4. AWS CloudFormation
Purpose: Infrastructure as Code (IaC) service for modeling and setting up AWS
resources automatically.
Key Features:
o Define templates in JSON or YAML.
o Automates provisioning and updates.
o Supports cross-region and cross-account deployment.
Use Case: Repeatable, version-controlled environment deployment.
5. AWS CloudTrail
6. Amazon CloudWatch
Purpose: Monitoring and observability service for AWS resources and applications.
Key Features:
o Metrics, logs, and alarms.
o Automated actions based on thresholds.
o Dashboards for real-time insights.
Use Case: Performance monitoring, operational alerts, and proactive scaling.
8. AWS Systems Manager
Purpose: Centralized operational hub for managing AWS and hybrid cloud
infrastructure.
Key Features:
o Run Command for remote automation.
o Patch Manager for security updates.
o Parameter Store for configuration data.
Use Case: Automating patching, software inventory, and remote operations.
9. AWS Config
10. AWS License Manager
Purpose: Simplifies license tracking and compliance for software running on AWS.
Use Case: Avoids license overuse and simplifies vendor audits.
Amazon CloudWatch
Amazon CloudWatch is a service used for monitoring and observing resources in real time, built
for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch
provides users with data and actionable insights to monitor their applications, respond to
system-wide performance changes, and optimize resource utilization. CloudWatch collects
monitoring and operational data in the form of logs, metrics, and events, providing its users with
an aggregated view of AWS resources, applications, and services that run on AWS. CloudWatch
can also be used to detect anomalous behavior in your environments, set warnings and
alarms, visualize logs and metrics side by side, take automated actions, and troubleshoot issues.
The Amazon CloudWatch agent is an open-source, lightweight tool that is installed on resources
to collect data about them, such as the following:
Metrics: The CloudWatch agent records CPU utilization, memory usage, disk I/O, and
other system-level statistics.
Logs: The agent collects log files that can be used for further analysis.
Amazon CloudWatch is a monitoring and observability service provided by Amazon Web Services
(AWS) that enables users to collect and track metrics, monitor log files, set alarms, and
automatically react to changes in AWS resources. It helps users gain insights into the operational
health, performance, and resource utilization of their AWS infrastructure and applications.
Amazon CloudWatch is a monitoring service offered by Amazon Web Services that you can use to
monitor aspects of your applications such as performance and resource use.
You can set an alarm on the resource usage of your applications; when the limits are exceeded,
you automatically receive a notification by email.
How Amazon CloudWatch Works
First, Amazon CloudWatch is configured for the resources that you want to monitor. The agents
that are configured then collect logs and metrics from those resources, whether the services run
on premises or in AWS. CloudWatch also provides an overall view of the resources through
dashboards, which you can use to troubleshoot issues. Based on changes made to the resources,
CloudWatch can carry out operational changes, such as AWS auto scaling of the resources, and it
performs real-time analysis of the logs that it receives.
Metrics
It represents a time-ordered set of data points that are published to Amazon CloudWatch.
Metric is a variable that is monitored and data points are the value of that variable over
time.
They are uniquely defined by a name, namespace, and zero or more dimensions.
Metric math is used to query multiple CloudWatch metrics and use math expressions to
create new time series based on these metrics
Dimensions
Dimensions are the unique identifiers for a metric, so whenever you add a unique
name/value pair to one of the metrics, you are creating a new variation of that metric.
Statistics
The few available statistics on CloudWatch are maximum, minimum, sum, average, and
sample count.
Alarm
It watches a single metric over a specified time period and performs one or more specified
actions based on the value of the metric.
The estimated AWS charges can also be monitored using the alarm.
Percentiles
It helps the user to get a better understanding of the distribution of metric data.
CloudWatch dashboard
Customizable pages in the CloudWatch console that you can use to monitor your resources
in a single view.
CloudWatch agent
The CloudWatch agent must be installed on the instances or servers that you want to monitor.
It collects logs and system-level metrics from EC2 instances and on-premises servers.
CloudWatch Events
CloudWatch Events help you create a set of rules that match events (for example, the stopping
of an EC2 instance).
These events can be routed to one or more targets like AWS Lambda functions, Amazon
SNS Topics, Amazon SQS queues, and other target types.
CloudWatch Events observes the operational events continuously and whenever there is
any change in the state of the event, it performs the action by sending notifications,
activating lambda, etc.
An event indicates a change in the AWS environment. Whenever there is a change in the
state of AWS resources, events are generated.
Targets process events. They include Amazon EC2 instances, AWS Lambda functions, and more.
A target receives the events in JSON format.
CloudWatch logs
Amazon CloudWatch Logs enables you to store, monitor, and access log files from AWS
resources like Amazon EC2 instances, Route 53, and others.
It also helps you troubleshoot system errors and maintain the logs in highly durable
storage.
It can also log information about the DNS queries that Route 53 receives.
Example: notifying the gfg website management team when the instance hosting the gfg website is under heavy load.
Whenever the CPU utilization of the instance on which the GeeksForGeeks website is hosted goes
above 80%, a CloudWatch alarm is triggered. This alarm then activates the SNS topic,
which sends the alert email to the attached gfg subscribers.
CloudWatch can be used to monitor the performance of AWS resources, applications, and
infrastructure components in real-time
CloudWatch allows users to set up alarms that trigger notifications or automated actions in
response to changes in the state of their resources.
CloudWatch can be used to store, search, and analyze log data from various AWS services,
applications, and infrastructure components.
CloudWatch can be used to monitor the performance of EC2 instances, RDS databases,
and other resources, which can then be used to trigger automatic scaling events.
It improves the total cost of ownership by providing alarms and by taking automated
actions when defined limits are breached.
Applications and resources can be optimized by examining the logs and metric data.
Detailed Insights from the application are provided through data like CPU utilization,
capacity utilization, memory utilization, etc.
It provides a great platform to compare and contrast the data produced by various AWS
services.
CloudWatch may not be able to handle large amounts of log data, especially during spikes
in usage, making it difficult to maintain a consistent level of monitoring and logging.
The monitoring and logging processes of CloudWatch can consume significant system
resources, impacting the overall performance of an application.
Integrating CloudWatch with other AWS services and third-party tools can be challenging.
Setting up and managing CloudWatch can be complex, especially for users who are not
familiar with cloud-based systems.
Challenges of CloudWatch
Limited Visibility and Granularity: CloudWatch provides metrics and logs at a high level,
which may lack the granularity needed for detailed analysis and troubleshooting. Users
may encounter difficulty in pinpointing the root cause of issues due to limited visibility
into specific system components or resources.
Free Tier: The Amazon CloudWatch free tier includes 10 custom metrics, 10 alarms, 3
dashboards, and 5 GB of log data ingestion per month.
Pay-as-you-go: Beyond the free tier, each metric carries its own base charge, logs are
charged per GB, and dashboards are charged per dashboard per month. In short, you are
charged according to how much you use.
CloudWatch Metrics
1. Introduction
Amazon CloudWatch Metrics are time-ordered sets of data points that represent the
performance of your AWS resources, applications, or custom systems.
They’re essentially numeric measurements collected over time, used for monitoring, alerting, and
operational insights.
For example: an EC2 instance publishes a CPUUtilization metric, and each data point is the
CPU percentage measured at a specific time.
2. Key Properties
Namespace: Logical container for metrics (default AWS service namespaces or custom namespaces).
Metric Name: Describes what is being measured.
Dimensions: Name/value pairs that uniquely identify a metric (up to 30 per metric).
Datapoints: Individual time-stamped measurements.
Unit: Standard measurement unit (e.g., Seconds, Bytes, Percent).
3. Types of Metrics
1. AWS-Provided Metrics
o Automatically published by AWS services.
o Example: EC2 → CPUUtilization, DiskReadOps, NetworkIn.
2. Custom Metrics
o Created by you using PutMetricData API or SDK.
o Useful for application-level monitoring (e.g., number of active users); see the sketch after this list.
3. High-Resolution Metrics
o Granularity of 1 second (instead of default 1 minute).
o Useful for fast-changing systems like real-time gaming or high-frequency trading.
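Here is the sketch referenced in the custom metrics item above: a minimal boto3 call that publishes one data point with PutMetricData. The namespace, metric name, and dimension are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one data point for an application-level metric.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[
        {
            "MetricName": "ActiveUsers",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 42,
            "Unit": "Count",
            "StorageResolution": 60,   # set to 1 for a high-resolution metric
        }
    ],
)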
4. Metric Granularity & Retention
CloudWatch stores metrics at different resolutions:
5. Using Metrics
Dashboards → Visualize metrics in real-time.
Alarms → Set thresholds for metrics to trigger actions.
Logs & Insights → Combine metric trends with log data.
Anomaly Detection → Automatic learning of normal patterns and detecting deviations.
CloudWatch Alarms
Amazon CloudWatch Alarms are a core AWS monitoring feature that automatically track and
respond to changes in your resources’ metrics. They let you define specific thresholds for
performance or operational metrics, and when those thresholds are breached, CloudWatch can
notify you or trigger automated actions.
Monitor AWS resources and custom metrics (e.g., CPU utilization, latency, error
rates).
Alert administrators via Amazon Simple Notification Service (SNS), email, or SMS
when thresholds are crossed.
Trigger automated actions like scaling EC2 instances up or down, restarting services, or
running AWS Lambda functions.
2. Core Components
1. Metric
o The data you want to monitor (e.g., CPUUtilization for an EC2 instance,
Invocations for a Lambda function).
o Can be AWS-provided metrics or custom metrics you publish.
2. Statistic or Math Expression
o Determines how the metric values are aggregated (e.g., Average, Sum, Maximum,
Minimum, p90 percentile).
3. Period
o The length of time over which the metric data is aggregated (e.g., 60 seconds, 5
minutes).
4. Threshold
o The value that triggers the alarm if crossed.
5. Comparison Operator
o How the metric is compared to the threshold:
GreaterThanThreshold
GreaterThanOrEqualToThreshold
LessThanThreshold
LessThanOrEqualToThreshold
EqualToThreshold
6. Evaluation Periods
o Number of consecutive periods that must breach the threshold before the alarm
changes state.
3. Alarm States
An alarm is always in one of three states: OK (the metric is within the threshold), ALARM (the
metric has breached the threshold), or INSUFFICIENT_DATA (there is not enough data to
evaluate the alarm).
4. Types of Alarms
1. Metric Alarms
o Monitor a single metric or a math expression based on multiple metrics.
2. Composite Alarms
o Combine multiple alarms into a single one.
o Uses Boolean logic (AND/OR) to reduce noise from multiple alerts.
Scenario: Trigger an alarm if CPU utilization of an EC2 instance exceeds 80% for 5 minutes.
1. Metric: CPUUtilization
2. Statistic: Average
3. Period: 60 seconds
4. Threshold: 80%
5. Evaluation Periods: 5
6. Action: Send SNS notification and trigger Auto Scaling.
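The scenario above maps directly onto a single PutMetricAlarm call. A hedged boto3 sketch follows; the instance ID and SNS topic ARN are placeholders, and the Auto Scaling action is omitted for brevity.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# ALARM when average CPUUtilization > 80% for 5 consecutive 60-second periods.
cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)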
AWS CloudTrail
AWS CloudTrail is a service that enables governance, compliance, and operational and risk
auditing of your AWS account. It records and logs every API call made on your AWS account,
capturing details such as the identity of the API caller, the time of the API call, the source IP
address, the request parameters, and the response elements returned by the AWS service. This
comprehensive logging allows you to track changes and activities across your AWS
infrastructure, helping with security analysis, resource change tracking, troubleshooting, and
meeting compliance requirements.
CloudTrail Lake: AWS CloudTrail Lake is a managed data lake used to capture, store,
access, and analyze user and API activity on AWS for audit and security purposes.
CloudTrail Lake converts existing events in row-based JSON format to Apache ORC format,
a columnar storage format designed for quick data retrieval. Event data stores aggregate
events into immutable collections based on criteria that you choose by using advanced event
selectors. Event data can be kept in an event data store for a maximum of seven years
(2,557 days). Using AWS Organizations, you can create an event data store for a single AWS
account or for multiple AWS accounts. Any CloudTrail logs that you currently have
can be imported into an existing or new event data store from your S3 buckets. With Lake
dashboards, you can also see the top CloudTrail event trends. See Creating event data
stores and Working with AWS CloudTrail Lake for further details.
Trails: In addition to delivering and storing events in an Amazon S3 bucket, trails can also
deliver events to Amazon CloudWatch Logs and Amazon EventBridge. These
events can be fed into your security monitoring solutions. You can also search
and analyze your CloudTrail logs by using custom third-party tools or services like
Amazon Athena.
Using AWS Organizations, you can create trails for a single AWS account or for multiple AWS
accounts. Your management events can be analyzed for unusual behavior in API call volumes and
error rates by logging Insights events. See Creating a trail for your AWS account for further details.
In the diagram above, an AWS account is created in the AWS environment. When a new account is
created, CloudTrail is activated. Whenever we carry out any operation with the AWS account, such
as signing in, creating and deleting EC2 instances, creating S3 buckets, or uploading data into
them, an API call is made on the back end.
The activities that we carry out with our AWS account can be performed in a variety of ways. For
instance, we can use the account with the AWS CLI (AWS Command Line Interface), an SDK
(Software Development Kit), or the AWS Management Console. Whichever method we use,
whenever we execute an activity from the account, the back-end API is called. When the back-end
API is called, an event is generated, and the event log is saved in CloudTrail. An event is created
in CloudTrail only when we carry out an activity with the AWS account.
CloudTrail keeps a record of AWS account activity for 90 days in its event history. To keep
event logs for longer than 90 days, you can deliver them to an S3 bucket. SNS (Simple Notification
Service) notifications can also be configured in CloudTrail.
CloudTrail log file integrity validation: a feature you can use to verify that log files have not
been changed, supporting IT security and auditing procedures.
Security and Compliance: Meeting security and compliance standards is made easier with
CloudTrail. It supports security incident investigation and compliance audits by assisting
enterprises in identifying illegal or suspicious activity through the monitoring
of AWS actions.
Resource Change Tracking: AWS resource changes over time can be tracked with
CloudTrail. This helps with resource management and troubleshooting by helping to spot
configuration changes, authorization changes, and resource removals.
Alerting and Notifications: Businesses can configure alerts and notifications for a variety
of events that are logged in CloudTrail logs. The prompt response to urgent situations is
made possible by this proactive monitoring.
Your Amazon Web Services (AWS) account's activity is tracked and recorded by the AWS
CloudTrail service. It offers thorough logs of all API calls and operations made on your AWS
resources. This is how AWS CloudTrail functions:
Log Storage: You can define an Amazon S3 bucket where these log entries will be gathered
and stored. For your CloudTrail logs, you may set the bucket's location and retention time.
Access Control: Policies set forth by AWS Identity and Access Management
(IAM) govern who has access to CloudTrail logs. Who is permitted to read, write, or
administer CloudTrail logs can be specified.
Alerting and Notifications: You can configure in-the-moment alerts based on particular
occurrences or trends in your CloudTrail logs using CloudWatch Alarms. This enables you
to react rapidly to operational or security incidents.
Log Generation: Each time an API is called, CloudTrail creates a log entry with
information on the caller, the action taken, the resource used, and the timestamp.
Comprehensive Logging: Captures detailed logs of API calls and activities across AWS
services, providing visibility into actions taken by users, applications, or AWS services.
Integration with AWS Services: Integrates seamlessly with other AWS services like AWS
Lambda, S3, CloudWatch Logs, and CloudWatch Events for advanced monitoring and
automated responses to events.
Event History and Insights: Provides event history timelines and insights into API activity
trends, enabling operational troubleshooting, security analysis, and operational
intelligence.
Click on the created "MyTrail" and edit the storage location. Choose "Create new S3
bucket" and save changes.
Ensure data events are configured to deliver to the AWS CloudTrail console, Amazon S3
buckets, and optionally Amazon CloudWatch Logs.
Navigate to the S3 bucket, locate the first file, download it, and review the JSON formatted
data events.
Accessing CloudTrail
AWS CLI: Use commands like aws cloudtrail create-trail, aws cloudtrail describe-trails,
and aws cloudtrail lookup-events to manage trails, retrieve event history, and perform
automated tasks.
AWS SDKs: Integrate CloudTrail into your applications using SDK functions to
programmatically manage trails, retrieve and process event data, and incorporate
CloudTrail insights into application logic.
AWS CloudTrail API: Develop custom applications or scripts that interact directly with
CloudTrail API endpoints to automate tasks, perform complex queries, and integrate
CloudTrail data into external systems or reporting tools.
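As a small illustration of the programmatic access described above, here is a boto3 sketch that queries the 90-day event history for console sign-in events; the lookup attribute and time window are arbitrary examples.

import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up console sign-in events from the last 24 hours of event history.
now = datetime.now(timezone.utc)
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    StartTime=now - timedelta(days=1),
    EndTime=now,
    MaxResults=50,
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))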
Security and Compliance Monitoring: Monitor API calls and actions across AWS services
to detect unauthorized access, changes to resources, and potential security breaches.
CloudTrail logs provide detailed visibility for compliance audits and regulatory
requirements.
Change Management and Auditing: Track changes made to AWS resources over time,
including configuration changes, deployments, and updates. CloudTrail logs enable
auditing of resource history, aiding in change management and maintaining configuration
integrity.
Incident Response and Forensics: Use CloudTrail logs during incident response to
reconstruct events, analyze the scope of an incident, and identify impacted resources.
Facilitates forensic investigation and timely resolution of security or operational incidents.
AWS Config
AWS Config is a service provided by Amazon Web Services (AWS) that enables you to
assess, audit, and evaluate the configurations of your AWS resources. It continuously monitors
and records the configuration changes that happen inside your AWS environment, giving insight
into resource configuration history and supporting compliance, security, and operational best
practices.
AWS Config helps you keep control and visibility over your AWS
infrastructure by tracking changes to resource configurations over time. It captures details
such as configuration changes, relationships between resources, and the overall state of
your environment.
By leveraging AWS Config, organizations can ensure that their AWS resources comply with internal
policies, industry regulations, and security benchmarks. It also helps identify
unauthorized changes, evaluate compliance with desired configurations, and remediate non-compliant
resources. AWS Config improves the visibility, control, and governance of your AWS
environment.
Configuration Items: point-in-time records of the resources that AWS Config monitors. They
include metadata such as resource type, ID, configuration, and relationships.
Config Rules: rules that you define to enforce desired configurations or compliance
requirements. AWS Config evaluates these rules against configuration changes and reports
compliance status.
Configuration Recorder: records the configurations of supported resources in your AWS account.
Delivery Channel: specifies where AWS Config sends configuration change notifications, for
example, an Amazon S3 bucket or an Amazon SNS topic.
The configurations of your AWS resources are continuously monitored and stored in a centralized
repository by AWS Config. Here is a brief outline of how it works:
Change Tracking: AWS Config tracks changes to resource configurations over time,
identifying when modifications occur, what was changed, and who made the change. This
provides a comprehensive audit trail of configuration changes.
Rule Evaluation: AWS Config allows you to define rules that enforce desired
configurations or compliance requirements. It evaluates these rules against configuration
changes and reports compliance status.
Notifications and Alerts: AWS Config either integrates with AWS Lambda for custom
response actions or sends notifications via Amazon SNS (Simple Notification Service)
whenever a configuration change breaks a defined rule or sets off an alert condition.
Centralized Management: AWS Config gives you a centralized view of your AWS
environment, making it possible to examine configuration trends, fix problems, and keep
your resources in compliance.
Automation: Through integration with AWS Lambda, you can automate responses to
configuration changes based on predefined rules. You can use this to enforce policies, remediate
problems, or take other actions when something changes.
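A minimal boto3 sketch of the rule-evaluation workflow just described: deploy one AWS managed rule and then read back its compliance status. It assumes the configuration recorder and delivery channel are already set up; the rule name is a placeholder.

import boto3

config = boto3.client("config", region_name="us-east-1")

# Deploy an AWS managed rule that flags S3 buckets allowing public read access.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-public-read-prohibited",   # placeholder name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)

# Check the compliance results for the rule.
result = config.describe_compliance_by_config_rule(
    ConfigRuleNames=["s3-public-read-prohibited"]
)
print(result["ComplianceByConfigRules"])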
Now log in to the AWS Console by using your credentials, or create a new account.
AWS Config provides a detailed view of the resources associated with your AWS account,
including how they are configured, how they are related to one another, and how the configurations
and their relationships have changed over time
Daily recording: Receive configuration data once every day only if a change has occurred.
Amazon S3 bucket: create a bucket, and then click Next.
Step 6: Verify
Now go to AWS Config Dashboard and check AWS Config usage metrics
Change Tracking: AWS Config tracks changes to resource configurations over time,
helping you understand who made the change, when it happened, and what was changed.
Compliance Assurance: by allowing you to define and enforce compliance rules, AWS Config
helps ensure that your AWS environment adheres to industry standards, best practices, and
internal policies.
Automation Support: because AWS Config works with AWS Lambda, you can automate
responses to configuration changes based on predefined rules.
AWS Config pricing depends on two fundamental factors: the number of configuration items
recorded and the number of active Config rules.
Configuration Items: AWS Config charges based on the number of configuration items
recorded. Configuration items represent the resources in your AWS account that AWS Config
monitors and tracks configurations for, such as EC2 instances, S3 buckets, IAM roles, and
so on.
Active Config Rules: AWS Config also charges based on the number of active
Config rules you have deployed. Config rules are used to define desired configurations
or compliance requirements for your AWS resources. Pricing is determined by the
number of rules that are actively evaluating resource configurations.
1. Overview
AWS Systems Manager is a service that provides centralized operational control for your
AWS resources.
It allows you to view operational data from multiple AWS services and to automate operational
tasks across your AWS resources.
Think of it as a control tower for your EC2 instances, RDS databases, S3 buckets, and even on-
premise servers.
2. Key Features
a) Session Manager
Provides secure, browser-based or CLI shell access to managed instances without the need to
open inbound ports or manage SSH keys.
c) Patch Manager
Automates the process of patching managed instances with operating system and software updates.
d) Automation
Lets you define runbooks that automate common maintenance and deployment tasks across AWS
resources.
e) Parameter Store
Central place to store configuration data and secrets (like database connection strings).
Supports plain text and encrypted values (integrates with AWS KMS).
Useful for application configuration management (see the sketch after this feature list).
f) Inventory
Collects metadata about your instances (OS version, installed software, applications,
etc.).
Helps with compliance and audit reporting.
g) State Manager
Keeps managed instances in a defined state, for example by ensuring that specific agents or
configurations are applied on a schedule.
h) OpsCenter
Provides a central location to view, investigate, and resolve operational issues (OpsItems)
related to your AWS resources.
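Here is the sketch referenced in the Parameter Store feature above: a minimal boto3 example that stores and reads an encrypted configuration value. The parameter name and value are placeholders.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Store an encrypted configuration value (SecureString uses AWS KMS).
ssm.put_parameter(
    Name="/myapp/prod/db-connection-string",   # placeholder
    Value="mysql://user:pass@orders-db.internal:3306/orders",
    Type="SecureString",
    Overwrite=True,
)

# Read it back with transparent decryption.
param = ssm.get_parameter(
    Name="/myapp/prod/db-connection-string",
    WithDecryption=True,
)
print(param["Parameter"]["Value"])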
6. Pricing
Free tier for basic functionality (Run Command, Session Manager, Parameter Store
standard parameters).
Costs apply for:
o Advanced parameters in Parameter Store.
o Automation executions beyond free tier.
o Managed instance inventory collection.
o Session recording and data transfer.
1. Classic Load Balancer: It is the traditional form of load balancer which was used initially.
It distributes the traffic among the instances and is not intelligent enough to support host-
based routing or path-based routing. It ends up reducing efficiency and performance in
certain situations. It operates at both the connection level and the request level, and it
sits between the transport layer (TCP/SSL) and the application layer (HTTP/HTTPS).
2. Application Load Balancer: This type of Load Balancer is used when decisions are to be
made related to HTTP and HTTPS traffic routing. It supports path-based routing and host-
based routing. This load balancer works at the Application layer of the OSI Model. The
load balancer also supports dynamic host port mapping.
3. Network Load Balancer: This type of load balancer works at the transport layer(TCP/SSL)
of the OSI model. It’s capable of handling millions of requests per second. It is mainly
used for load-balancing TCP traffic.
4. Gateway Load Balancer: Gateway Load Balancers give you the ability to deploy, scale,
and manage virtual appliances such as firewalls. A Gateway Load Balancer combines a
transparent network gateway with a load balancer that distributes traffic to those appliances.
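The console walkthrough below can also be expressed in a few API calls. Here is a hedged boto3 sketch of creating an internet-facing Application Load Balancer, a target group, and a listener, and registering two instances; all IDs are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing ALB across two subnets (two Availability Zones).
lb = elbv2.create_load_balancer(
    Name="my-load-balancer",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholders
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# Target group for HTTP traffic to EC2 instances.
tg = elbv2.create_target_group(
    Name="my-target-group",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",                    # placeholder
    TargetType="instance",
)["TargetGroups"][0]

# Register Instance A and Instance B.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaaa"}, {"Id": "i-0bbbbbbbbbbbbbbbb"}],
)

# Listener on port 80 that forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)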
Step 1: Launch the two instances on the AWS management console named Instance A and Instance
B. Go to services and select the load balancer. To create AWS free tier account refer to Amazon
Web Services (AWS) – Free Tier Account Set up.
Step 4: Here you are required to configure the load balancer. Write the name of the load balancer.
Choose the scheme as internet facing.
Step 5: Add at least 2 availability zones. Select us-east-1a and us-east-1b
Step 6: We don't need to do anything here. Click on Next: Configure Security Groups
Step 7: Select the default security group. Click on Next: Configure Routing
Step 8: Choose the name of the target group to be my target group. Click on Next: Register Targets.
Step 9: Choose instance A and instance B and click on Add to register. Click on Next: Review.
Step 11: Congratulations!! You have successfully created a load balancer. Click on close.
Step 12: This highlighted part is the DNS name which when copied in the URL will host the
application and will distribute the incoming traffic efficiently between the two instances.
Step 13: This is the listener port 80 which listens to all the incoming requests
Step 15: Now we need to delete the instance. Go to Actions -> Click on Delete.
Features of cloud
No up-front investment
Lowering operating cost
Highly scalable and efficient
Easy access
Reducing business risks and maintenance expenses
Advantages of Elastic Load Balancer
ELB automatically distributes incoming application traffic across multiple targets, such
as EC2 instances, containers, and IP addresses, to achieve high availability.
It can automatically scale to handle changes in traffic demand, allowing you to maintain
consistent application performance.
It can monitor the health of its registered targets and route traffic only to the healthy targets.
It evenly distributes traffic across all availability zones in a region, improving fault
tolerance.
Disadvantages of Elastic Load Balancer
ELB can add latency to your application, as traffic must pass through the load balancer
before being routed to your targets.
It has limited customization options, so you may need to use additional tools and services
to fully meet your application's requirements.
It can increase your overall AWS costs, especially if you have high traffic volumes or
require multiple load balancers.
Scaling Amazon EC2 means you start with the resources you require at the time of starting
your service and build your architecture to automatically scale in or out, in response to the
changing demand. As a result, you only pay for the resources you utilize. You don't have to be
concerned about running out of computational power to satisfy your consumer's demand.
Pay for What You Use: With auto scaling, resources are used in an optimized way; when
demand is low, resource utilization is low, and when demand is high, resource utilization
increases, so AWS charges you only for the amount of resources you actually use.
Automatic Performance Maintenance: AWS Auto Scaling maintains optimal application
performance by taking workloads into account. It ensures that the application runs at the
desired level, which reduces latency, and it increases capacity based on your application's
needs.
Example: Here it involves a simple web application that helps employees locate conference
rooms for virtual meetings. In this scenario, the app sees light usage at the start and end of the
week. However, as more employees book meetings midweek, the demand for the application
rises during that period. The graph below shows the usage of the application’s capacity over a
week:
You can prepare for fluctuating capacity by provisioning enough servers to handle peak traffic,
guaranteeing the application always meets demand. However, this approach often leads to
excess capacity on slower days, which raises the overall operating costs. Alternatively, you
could allocate resources based on average demand, which reduces costs by avoiding
unnecessary equipment for occasional spikes. However, this might negatively impact user
experience when demand surpasses available capacity. EC2 Auto Scaling addresses this
problem by automatically adding instances as demand increases and removing them when no
longer needed. It uses EC2 instances, allowing you to pay only for what you actually use,
resulting in a more cost-efficient architecture that reduces unnecessary expenses.
Amazon EC2 Auto Scaling
Amazon EC2 Auto Scaling helps you scale your EC2 resources based on the demand of
incoming traffic. It maintains high availability and optimizes the cost of Amazon EC2.
EC2 Auto Scaling helps you create a collection of EC2 instances, called an Auto Scaling
group, to which a load balancer can distribute traffic. You can then specify the minimum,
maximum, and desired capacity for your Auto Scaling group, and EC2 Auto Scaling starts and
stops instances automatically to keep the group at the appropriate capacity.
EC2 Auto Scaling also lets you configure scaling policies in which you specify details
such as the CPU utilization or memory usage at which instances should be scaled, so the
group scales automatically with the demand on it.
Groups: For scaling and management, EC2 instances are grouped together so that they can
be treated as a single logical unit. You can specify the minimum and maximum number of
EC2 instances required, based on the demand of the incoming traffic (see the sketch after
this list).
Scaling Options: AWS Auto Scaling provides a number of scaling options, some of which are
listed here:
o Dynamic scaling
o Predictive scaling
o Scheduled scaling
o Manual scaling
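Here is the sketch referenced in the Groups item above: a minimal boto3 call that creates an Auto Scaling group from an existing launch template, spread across two subnets. The names, IDs, and capacities are placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Auto Scaling group that keeps between 2 and 6 instances running.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # placeholder subnets
)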
That's the point where Amazon EC2 Auto Scaling comes into the picture. You can use Amazon
EC2 Auto Scaling to add or remove Amazon EC2 instances in response to changes in your
application demand. By dynamically scaling your instances in and out as needed, you can
maintain a greater degree of application availability.
Here are some of the most important features of AWS Auto Scaling:
Dynamic Scaling: Adapts to changing conditions and responds by adjusting the number of EC2
instances to match demand. It helps you follow the demand curve for your application, which
means you don't have to scale instances ahead of time. For example, you can use target tracking
scaling policies to select a load metric for your application, such as CPU utilization, or use
Application Load Balancer's "Request Count Per Target" metric, which is a load-balancing option
for the Elastic Load Balancing service. Amazon EC2 Auto Scaling then adjusts the number of EC2
instances as needed to keep you on target.
Load Balancing: Load balancing involves distributing incoming traffic across multiple
instances to improve performance and availability. Amazon Elastic Load Balancing
(ELB) is a service that automatically distributes incoming traffic across multiple instances
in one or more Availability Zones.
Computing power is a programmable resource in the cloud, so you may take a more flexible
approach to scale your applications. When you add Amazon EC2 Auto Scaling to an
application, you may create new instances as needed and terminate them when they're no
longer in use. In this way, you only pay for the instances you use, when they're in use.
Horizontal Scaling: Horizontal scaling involves adding more instances to your application
to handle increased demand. This can be done manually by launching additional instances,
or automatically using Amazon EC2 Auto Scaling, which monitors your application's
workload and adds or removes instances based on predefined rules.
Vertical Scaling: Vertical scaling involves increasing the resources of an existing instance,
such as CPU, memory, or storage. In EC2 this is done by resizing the instance, that is, changing
its instance type. EC2 Auto Scaling itself scales horizontally, but the launch template or launch
configuration attached to a group determines the instance size used for newly launched instances.
Reactive Scaling: Reactive Scaling responds to changes in demand as they occur by adding
or removing instances based on predefined thresholds. This type of scaling reacts to real-
time changes, such as sudden spikes in traffic, by scaling the application accordingly.
However, it is not predictive, meaning the system adjusts only when demand changes are
detected.
Target Tracking Scaling: Target Tracking Scaling adjusts the number of instances in your
Auto Scaling group to maintain a specific metric at a target value. For example, you can
set a target for the average CPU utilization, and Auto Scaling will automatically add or
remove instances to keep the metric at the defined level.
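The following sketch shows one way to express such a policy with boto3, keeping average CPU utilization near 50 percent; the group and policy names are illustrative placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking: keep the group's average CPU utilization near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # placeholder group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add instances above ~50% CPU, remove them below it
    },
)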
Predictive Scaling: Helps you schedule the right number of EC2 instances based on predicted
demand. You can use dynamic and predictive scaling together for faster scaling of the
application. Predictive scaling forecasts future traffic and allocates the appropriate number of
EC2 instances ahead of time. Its machine learning algorithms identify changes in daily and weekly
patterns and automatically update the forecasts, which removes the need to manually scale
instances on particular days.
Scheduled Scaling: As the name suggests, this allows you to scale your application based on a
schedule you set. For example, a coffee shop owner may employ more baristas on weekends because
of increased demand and fewer on weekdays when demand is lower; scheduled scaling does the same
with instances (a small sketch follows).
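Here is a small, hedged sketch of scheduled scaling with boto3: two scheduled actions that scale the group out on Saturday mornings and back in on Monday mornings (times in UTC; names and capacities are placeholders).

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out ahead of the weekend rush (08:00 UTC every Saturday).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="weekend-scale-out",
    Recurrence="0 8 * * 6",  # cron: minute hour day-of-month month day-of-week
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=6,
)

# Scale back in for the quieter week (08:00 UTC every Monday).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="weekday-scale-in",
    Recurrence="0 8 * * 1",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
)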
Limitations of AWS EC2 Auto Scaling
There are several limitations to consider when using Amazon EC2 Auto Scaling:
Number of instances: Amazon EC2 Auto Scaling can support a maximum of 500 instances
per Auto Scaling group.
Instance health checks: Auto Scaling uses Amazon EC2 instance health checks to
determine the health of an instance. If an instance fails a health check, Auto Scaling will
terminate it and launch a new one. However, this process can take some time, which can
impact the availability of your application.
Scaling policies: Auto Scaling allows you to set scaling policies based on CloudWatch
metrics, but these policies can be complex to configure and may not always scale your
application as expected.
Cost: Using Auto Scaling can increase the cost of running your application, as you may be
charged for the additional instances that are launched.
Overall, it's important to carefully consider the limitations of Amazon EC2 Auto Scaling and how
they may impact your application when deciding whether to use this service.
Amazon EC2 Auto Scaling gives you the ability to automatically scale instances according to
demand. If problems are detected, the service replaces unhealthy instances with fully functional
ones. To automate fleet management for EC2 instances, Amazon EC2 Auto Scaling performs three
major functions:
Balancing capacity across Availability Zones: If your application is deployed across three
Availability Zones, Amazon EC2 Auto Scaling can help you balance the number of instances across
those zones. As a result, no zone receives many more or fewer instances than the others, which
leads to a balanced distribution of traffic and load.
Replacing and Repairing unhealthy instances: If the instances fail to pass the health check,
Autoscaling replaces them with healthy instances. As a result, the problem of instances
crashing is reduced, and you won't have to manually verify their health or replace them if
they're determined to be unhealthy.
Monitoring the health of instances: While the instances are running, Amazon EC2 Auto
Scaling ensures that they are healthy and that traffic is evenly allocated among them. It
does health checks on the instances on a regular basis to see if they're experiencing any
issues.
Automatic Scaling: Application capacity can be scaled automatically based on incoming traffic:
when the load increases the application scales out, and when the load decreases it scales back
in automatically.
Schedule Scaling: Based on previously available data about which times of day bring peak traffic
and which bring less, you can schedule scaling actions in advance.
Integration: Auto Scaling integrates with other AWS services, notably the machine learning
behind predictive scaling, which helps forecast incoming traffic so capacity can be scaled
accordingly.
Auto Scaling is an Amazon Web Service that allows instances to scale when traffic or CPU load
increases. It monitors all instances that are configured into the Auto Scaling group and ensures
that load is balanced across them, increasing or decreasing the number of instances according to
the configuration. When you create an Auto Scaling group, you configure the desired capacity,
minimum capacity, maximum capacity, and CPU utilization thresholds. For example, if average CPU
utilization rises above 60% across the instances, one more instance is created, and if it falls
below 30%, one instance is terminated; the exact thresholds are entirely up to your requirements
(a sketch of this setup follows). If any instance fails for any reason, the scaling group
maintains the desired capacity by starting another instance.
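The sketch below shows roughly how that 60%/30% rule can be wired up with boto3, using a step scaling policy and a CloudWatch alarm; all names are illustrative, and only the scale-out half is shown in full.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Step scaling policy: add one instance whenever the alarm below fires.
scale_out = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",   # placeholder group name
    PolicyName="cpu-scale-out",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 1}],
)

# Alarm: average CPU across the group stays above 60% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=60.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[scale_out["PolicyARN"]],
)
# A mirror-image policy (ScalingAdjustment=-1) plus a LessThanThreshold alarm at
# 30% would handle scaling back in.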
Every EC2 instance within an Auto Scaling group follows a distinct lifecycle. The lifecycle
begins when the instance is launched (Pending), continues while the instance serves traffic
(InService), and concludes with its termination (Terminating, then Terminated); lifecycle hooks
can pause an instance at these transition points so that custom actions can run.
Pricing:
Auto Scaling Service: No additional cost for using Auto Scaling; you only pay for the underlying
resources (EC2 instances, etc.).
Amazon EC2 On-Demand Instances: Starting at $0.0042 per hour (for t4g.micro; varies by instance
type and Region).
Amazon CloudWatch (Monitoring): Basic monitoring is free; detailed monitoring starts at $0.01 per
metric per month.
Scaling Plan
A scaling plan is a blueprint for automatically scaling your cloud resources up or down in
response to incoming traffic. It gives a complete view of the resources you want to scale, the
metrics you want to monitor, and the actions to take when those metrics rise above or fall below
certain levels. A scaling plan can manage many AWS resources, such as Amazon EC2 Auto Scaling
groups, Amazon ECS services, Amazon DynamoDB tables and indexes, and Amazon Aurora replicas; it
applies to AWS resources only, not to resources from other cloud providers.
AWS CloudFormation
AWS CloudFormation is an Infrastructure as Code (IaC) offering that allows you to describe and
provision AWS infrastructure in a repeatable and automated way. You write CloudFormation
templates (in JSON or YAML) to specify the resources you require, like EC2 instances, S3
buckets, or RDS databases, and CloudFormation does the work of creating, managing, and
updating them for you.
With CloudFormation, you can treat your infrastructure as one unit, known as a stack, to be able
to easily replicate and have consistency between various AWS environments and regions. This
eliminates the requirement for manual configuration, which is prone to errors and takes time.
AWS CloudFormation is a service offered by the AWS cloud that is mainly used to provision AWS
services such as EC2, S3, Auto Scaling, and load balancing. Instead of managing all of these
services manually, you can provision them automatically as Infrastructure as Code (IaC) with AWS
CloudFormation.
1. No Upfront Investment
AWS CloudFormation operates on a pay-as-you-go model, meaning there is no need for large
upfront costs.
2. Highly Scalable
Easily scale your infrastructure up or down according to your needs without the hassle of manual
intervention.
3. Easy Access
CloudFormation is integrated with the AWS Management Console, providing users with an
intuitive interface to manage resources.
1. Infrastructure Provisioning
You can describe the resources an application needs (networks, compute, databases, and so on) in
a template and let CloudFormation provision them in a repeatable way.
2. Auto-Scaling Environments
You can create auto-scaling groups using CloudFormation so that your resources scale
automatically depending on load, with optimal performance and cost.
3. Multi-Region Deployments
With CloudFormation, you can provision resources in multiple Regions so that your infrastructure
is resilient to a failure in any particular Region.
4. CI/CD Integration
CloudFormation supports integration with AWS CodePipeline, Jenkins, and other CI/CD tools so
that you can automate the deployment of infrastructure as well as application code.
1. Automation
AWS CloudFormation helps to automate the process of creating, configuring, and managing AWS
resources. This allows for the infrastructure to be deployed quickly, reliably, and repeatedly.
2. Consistency
With AWS CloudFormation, it is possible to create standard templates of infrastructure stacks
that can be used to create identical copies of the same infrastructure. This ensures consistency
in infrastructure deployments and makes them easier to maintain.
3. Cost savings
AWS CloudFormation helps to reduce costs by allowing customers to use existing infrastructure
templates and reuse them across multiple environments. This reduces the cost of designing and
deploying new infrastructure.
4. Security
AWS CloudFormation helps to ensure that all AWS resources are configured securely by using
security policies and rules. This helps to protect the infrastructure from potential security threats.
5. Scalability
AWS CloudFormation allows for the quick and easy scaling of resources on demand. This means
that customers can quickly and easily add resources to meet their changing needs.
Amazon Web Services is a subsidiary of Amazon that provides on-demand cloud computing platforms
to individuals, companies, and governments on a pay-as-you-go basis.
Imagine that you have to develop an application that uses various AWS resources. Creating and
managing those resources by hand can be time-consuming and challenging, and it becomes difficult
to focus on developing the application when you spend most of your time managing AWS resources.
This is where AWS CloudFormation comes into the picture.
A template is written in JSON or YAML; this unit focuses on the JSON form. JSON is a text-based
format that represents structured data based on JavaScript object syntax. The template carries
the details of the AWS resources in a structured format, and CloudFormation creates the AWS
infrastructure according to it. A template is made up of the following main sections (a minimal
example follows the list).
Parameters: Parameters are used when you want to provide custom or dynamic values to
the stack during runtime. Therefore, we can customize templates using parameters.
Mappings: Mappings are lookup tables that match a key to a set of named values, which the
template can reference (for example, to pick a value based on a Region or an environment
parameter).
Conditions: Conditions control whether certain resources are created, or whether certain
resource properties are assigned a value, when the stack is created or updated.
Resources: In this section you specify the AWS resources (an EC2 instance, an S3 bucket, an AWS
Lambda function, and so on) that you want in your stack, along with their properties.
Outputs: Outputs define values that are returned when you view your CloudFormation stack's
properties, such as resource names or endpoints.
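To make these sections concrete, here is a minimal, hedged sketch in Python (boto3): a small JSON template with Parameters, Resources, and Outputs, deployed as a stack. The bucket resource, parameter, and stack name are illustrative; Mappings and Conditions are optional sections that follow the same top-level pattern.

import json
import boto3

# A small template with Parameters, Resources, and Outputs sections.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "Environment": {"Type": "String", "Default": "dev"}
    },
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                # Fn::Sub builds a unique name from the parameter and account ID.
                "BucketName": {"Fn::Sub": "demo-${Environment}-${AWS::AccountId}"}
            },
        }
    },
    "Outputs": {
        "BucketName": {"Value": {"Ref": "DemoBucket"}}
    },
}

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(
    StackName="demo-stack",  # placeholder stack name
    TemplateBody=json.dumps(template),
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
)

The same template body could equally be saved as a YAML or JSON file and deployed from the console.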
Understanding the core concepts that CloudFormation templates use to organize resources,
settings, and functions is key to managing AWS infrastructure efficiently.
1. Template
A CloudFormation template is simply a JSON or YAML file that defines the AWS resources to be
created and configured.
2. Stacks
A stack is the collection of resources created from a CloudFormation template. When you deploy a
template, CloudFormation creates a stack, and all the resources defined in it are created,
updated, and deleted together as a single unit.
3. Formatting
CloudFormation templates can be written in JSON or YAML. YAML is often preferred because it is
more concise and easier to read, for both small and large templates.
4. Change Sets
Change sets let you preview what CloudFormation will modify in the deployed resources before an
update is applied to a stack. This helps ensure that changes to the infrastructure do not
introduce unintended risks (a short sketch of working with change sets follows this list).
5. Functions
CloudFormation provides several built-in intrinsic functions (such as Fn::Sub and Fn::Join) that
make dynamic configuration easier, so resource properties can be computed and adjusted as the
resources are deployed.
6. Parameters
Parameters allow user input when a stack is deployed, which makes it easier to create templates
that are flexible and reusable. For instance, instance types, VPC IDs, or environment names can
be supplied as parameters.
7. Conditions
Conditions enable or disable the creation of a resource depending on whether a given expression
evaluates to true or false (for example, depending on whether the target environment is
production or development). This allows for more complex template logic, deploying different
resources based on the provided parameters.
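As a hedged sketch of the change set workflow described above (boto3; the stack and change set names are placeholders, and the revised template is assumed to be in a local file):

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
updated_template = open("updated-template.json").read()  # assumed revised template file

# Create a change set that describes what an update to the stack would change.
cloudformation.create_change_set(
    StackName="demo-stack",        # placeholder stack name
    ChangeSetName="add-logging",   # placeholder change set name
    ChangeSetType="UPDATE",
    TemplateBody=updated_template,
)

# Wait until the change set is ready, then review the proposed changes.
waiter = cloudformation.get_waiter("change_set_create_complete")
waiter.wait(StackName="demo-stack", ChangeSetName="add-logging")

details = cloudformation.describe_change_set(
    StackName="demo-stack", ChangeSetName="add-logging"
)
for change in details["Changes"]:
    print(change["ResourceChange"]["Action"],
          change["ResourceChange"]["LogicalResourceId"])

# Apply the change set only once the preview looks right.
cloudformation.execute_change_set(
    StackName="demo-stack", ChangeSetName="add-logging"
)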
Deploying a CloudFormation template can be done through multiple methods, each catering to
different preferences and workflows.
1. AWS Management Console
The AWS Management Console offers a user-friendly way to deploy templates. Simply log in,
navigate to CloudFormation, and select "Create Stack." You can then upload your template (in
JSON or YAML format), configure parameters, tags, and permissions, and finalize by clicking
"Create Stack." This method is ideal for those who prefer a visual, straightforward interface.
2. CloudFormation Designer
For a more graphical approach, CloudFormation Designer allows users to visually build or modify
templates using a drag-and-drop interface within the AWS Console. After creating or adjusting
your template, deployment is just a click away by selecting "Create Stack." This method suits users
who enjoy visual tools for infrastructure design.
AWS CloudFormation Hooks is a powerful feature that helps ensure your CloudFormation
resources comply with your organization’s security, operational, and cost optimization standards.
CloudFormation Hooks allows you to implement custom code that proactively checks the
configuration of AWS resources before they are provisioned. If any resources are found to be non-
compliant, CloudFormation can either block the provisioning process or issue a warning while
allowing the process to continue, providing enhanced control over your infrastructure setup.
1. Automated Compliance Checks
CloudFormation Hooks automatically verify that your resources meet your organization's rules
and standards before deployment. By catching non-compliant resources early, Hooks help prevent
issues and ensure that only resources compliant with your policies are provisioned in your cloud
environment.
2. Personalized Checks
You have the flexibility to create custom checks tailored to your specific organizational needs.
These checks ensure resources adhere to your defined standards before they are deployed, giving
you full control over your cloud infrastructure.
3. Lifecycle Management
CloudFormation Hooks allow you to track and manage resources throughout their lifecycle,
ensuring they remain compliant with your rules and standards from provisioning to
decommissioning.
4. Cost Optimization
By enforcing guidelines that control resource usage, CloudFormation Hooks helps prevent
unnecessary spending and ensures cost-efficient infrastructure management. You can set rules to
limit the over-provisioning of resources, effectively controlling costs and optimizing spending.
5. Enhanced Security
CloudFormation Hooks adds an extra layer of security by enforcing strict security policies during
resource provisioning. This ensures that unauthorized or risky configurations are prevented,
thereby enhancing the overall protection of your cloud environment.
How to Create an AWS CloudFormation Template
Choose an Existing Template: You can select a previously created template and customize
it to fit your needs. This option allows you to modify the template to suit your current
requirements.
Use a Sample Template: AWS provides several sample templates to help you get started. You can
choose one of these sample templates and modify it to deploy your infrastructure; this unit
follows that approach. Once you select a sample template, you can customize it to match your
infrastructure setup.
Build Visually with AWS Application Composer: If you prefer a more hands-on approach, you can
use AWS Application Composer to visually design your template. This tool offers a drag-and-drop
interface, making it easier to configure infrastructure components without needing to write
code. It's a great choice if you want to build a template visually and have it generated
automatically.