Practice Test 2: AWS Certified Solutions Architect Associate
AWS Certified Solutions Architect Associate Practice Test 2 - Results
Attempt 1
Question 1 (Incorrect)
A company has a static corporate website hosted in a standard Amazon S3 bucket and a new
web domain name that was registered using Amazon Route 53. There is a requirement to
integrate these two services in order to successfully launch the corporate website.
What are the prerequisites when routing traffic using Route 53 to a website that is hosted in an
S3 Bucket? (Select TWO.)
Correct selection
The S3 bucket name must be the same as the domain name.
The record set must be of type "MX".
Your selection is incorrect
The Cross-Origin Resource Sharing (CORS) option should be enabled in the S3 bucket.
Correct selection
A registered domain name
Your selection is incorrect
The S3 bucket must be in the same region as the hosted zone.
Overall explanation
Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service
designed to efficiently and reliably route end-user requests to Internet applications. As part of
the AWS (Amazon Web Services) cloud ecosystem, Route 53 provides a range of features for
managing domain names, DNS records, and directing web traffic.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
scalable, durable, and low-latency storage. It is commonly used to store and retrieve large
volumes of data, including documents, images, videos, backups, and log files.
To route traffic to a website hosted in an S3 bucket, two prerequisites must be met: the domain name must be registered (with Route 53 or any other registrar), and the S3 bucket must be configured for static website hosting and carry exactly the same name as the domain (for example, a bucket named example.com for the domain example.com). Route 53 then directs traffic to the bucket through an alias record that points to the S3 website endpoint.
Hence, the correct answers are: The S3 bucket name must be the same as the domain name and A registered domain name.
The option that says: The record set must be of type "MX" is incorrect because MX records are used to route email. Routing traffic to an S3 website endpoint requires an alias record of type "A".
The option that says: The Cross-Origin Resource Sharing (CORS) option should be enabled in the S3 bucket is incorrect because CORS only governs cross-domain requests made by client-side scripts; it is not a prerequisite for routing traffic with Route 53.
The option that says: The S3 bucket must be in the same region as the hosted zone is incorrect because Route 53 hosted zones are global and are not tied to any particular region, so the bucket can reside in any region.
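For reference, this is roughly how the alias record could be created with the AWS SDK for Python (boto3). It is a minimal sketch: the hosted zone ID of the domain is a placeholder, and the S3 website endpoint values shown are the documented ones for us-east-1, which differ per region.

```python
import boto3

route53 = boto3.client("route53")

# Placeholder: the Route 53 hosted zone ID for the registered domain example.com
HOSTED_ZONE_ID = "Z1234567890ABC"

# Region-specific values for the S3 website endpoint (shown here for us-east-1;
# consult the "Amazon S3 website endpoints" table for other regions)
S3_WEBSITE_HOSTED_ZONE_ID = "Z3AQBSTGFYJSTF"
S3_WEBSITE_ENDPOINT = "s3-website-us-east-1.amazonaws.com"

# Create an alias A record that points example.com to the S3 website endpoint.
# The bucket must be named example.com and have static website hosting enabled.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": S3_WEBSITE_HOSTED_ZONE_ID,
                        "DNSName": S3_WEBSITE_ENDPOINT,
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```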
References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/
Domain
Design Secure Architectures
Question 18 (Incorrect)
An organization is currently using a tape backup solution to store its application data on-
premises. Plans are in place to use a cloud storage service to preserve the backup data for up
to 10 years, which may be accessed about once or twice a year.
Which of the following is the most cost-effective option to implement this solution?
Use Amazon S3 to store the backup data and add a lifecycle rule to transition the current
version to Amazon S3 Glacier.
Correct answer
Use AWS Storage Gateway to backup the data and transition it to Amazon S3 Glacier
Deep Archive.
Use AWS Storage Gateway to backup the data directly to Amazon S3 Glacier Flexible
Retrieval.
Your answer is incorrect
Order an AWS Snowball Edge appliance to import the backup directly to Amazon S3
Glacier Flexible Retrieval.
Overall explanation
Tape Gateway enables you to replace physical tapes on-premises with virtual tapes in AWS
without changing existing backup workflows. Tape Gateway supports all leading backup
applications and caches virtual tapes on-premises for low-latency data access. Tape Gateway
encrypts data between the gateway and AWS for secure data transfer and compresses data
and transitions virtual tapes between Amazon S3 and Amazon S3 Glacier Flexible Retrieval, or
Amazon S3 Glacier Deep Archive, to minimize storage costs.
The scenario requires backing up the application data to a cloud storage service for long-term retention of up to 10 years. Given that the organization already uses a tape backup solution, an option built on AWS Storage Gateway (Tape Gateway) is the most suitable answer. Tape Gateway can archive virtual tapes in the Amazon S3 Glacier Flexible Retrieval or Amazon S3 Glacier Deep Archive storage classes, reducing the monthly cost of storing long-term data in the cloud by up to 75%.
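As a rough boto3 illustration of this setup, the call below creates virtual tapes whose archived data is stored in the Deep Archive pool, so tapes ejected by the backup application land in S3 Glacier Deep Archive. The gateway ARN is a placeholder for an already-activated Tape Gateway.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

# Placeholder ARN of an already-activated Tape Gateway
GATEWAY_ARN = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE"

# Create virtual tapes that archive to the Deep Archive pool when ejected.
sgw.create_tapes(
    GatewayARN=GATEWAY_ARN,
    TapeSizeInBytes=100 * 1024**3,   # 100 GiB per virtual tape
    ClientToken=str(uuid.uuid4()),   # idempotency token
    NumTapesToCreate=5,
    TapeBarcodePrefix="TD",
    PoolId="DEEP_ARCHIVE",           # archive pool: GLACIER or DEEP_ARCHIVE
)
```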
Hence, the correct answer is: Use AWS Storage Gateway to backup the data and transition
it to Amazon S3 Glacier Deep Archive.
The option that says: Use AWS Storage Gateway to backup the data directly to Amazon S3
Glacier Flexible Retrieval is incorrect. Although this is a valid solution, moving to S3 Glacier
Flexible Retrieval is typically more expensive than directly backing it up to Glacier Deep Archive.
The option that says: Order an AWS Snowball Edge appliance to import the backup
directly to Amazon S3 Glacier Flexible Retrieval is incorrect because Snowball Edge cannot
import backups directly into S3 Glacier Flexible Retrieval. Moreover, the Amazon S3 Glacier Deep
Archive storage class should be preferred here, as it is more cost-effective than Glacier Flexible
Retrieval for 10-year retention.
The option that says: Use Amazon S3 to store the backup data and add a lifecycle rule to
transition the current version to S3 Glacier Flexible Retrieval is incorrect. Although this is a
possible solution, it is difficult to directly integrate a tape backup solution to S3 without using
Storage Gateway. Additionally, S3 Glacier Deep Archive is the most cost-effective storage class
for long-term retention.
References:
https://aws.amazon.com/storagegateway/faqs/
https://aws.amazon.com/s3/storage-classes/
Check out this AWS Storage Gateway Cheat Sheet:
https://tutorialsdojo.com/aws-storage-gateway/
Domain
Design Cost-Optimized Architectures
Question 20 (Incorrect)
A company is building an internal application that allows users to upload images. Each upload
request must be sent to Amazon Kinesis Data Streams for processing before the pictures are
stored in an Amazon S3 bucket.
The application should immediately return a success message to the user after the upload,
while the downstream processing is handled asynchronously. The processing typically takes
about 5 minutes to complete.
Which solution will enable asynchronous processing from Kinesis to S3 in the most cost-
effective way?
Correct answer
Use Kinesis Data Streams with AWS Lambda consumers to asynchronously process
records and write them to S3.
Use a combination of AWS Lambda and Step Functions to orchestrate service
components and asynchronously process the requests.
Your answer is incorrect
Send data from Kinesis Data Streams to Amazon Data Firehose and configure it to
deliver directly to S3.
Use a combination of Amazon SQS to queue the requests and then asynchronously
process them using On-Demand Amazon EC2 Instances.
Overall explanation
AWS Lambda supports both synchronous and asynchronous invocation of functions. When
AWS services are used as event sources, the invocation type is predetermined and cannot be
changed. For Amazon Kinesis Data Streams, AWS Lambda uses an event source mapping to
poll the stream, batch records, invoke the function, and manage checkpoints and retries.
By combining Kinesis Data Streams with Lambda consumers, the application can immediately
return a success message to the user after placing the upload request into the stream. Lambda
then processes the records asynchronously and stores the results in Amazon S3. This
serverless integration is cost-effective because Lambda scales automatically, requires no server
management, and only incurs costs for the provisioned shards and actual function invocations.
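A minimal sketch of such a consumer is shown below, assuming a hypothetical destination bucket, stream, and function name; the event source mapping delivers batches of base64-encoded Kinesis records to the function.

```python
import base64
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "image-processing-results"  # hypothetical destination bucket


def handler(event, context):
    """Invoked by the Kinesis event source mapping with a batch of records."""
    for record in event["Records"]:
        # Kinesis record data arrives base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # ... image processing (roughly 5 minutes) would happen here ...
        key = f"processed/{record['kinesis']['sequenceNumber']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload))


# One-time setup: the event source mapping that polls the stream and batches records
boto3.client("lambda").create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/upload-requests",
    FunctionName="process-uploads",   # hypothetical function name
    StartingPosition="LATEST",
    BatchSize=100,
)
```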
Hence, the correct answer is: Use Kinesis Data Streams with AWS Lambda consumers to
asynchronously process records and write them to S3.
The option that says: Use a combination of AWS Lambda and Step Functions to
orchestrate service components and asynchronously process the requests is incorrect
because Step Functions is an orchestration service (branching, sequencing, human approval,
retries) and adds per–state transition cost without a workflow need here. A direct Kinesis Data
Streams to Lambda (event source mapping) to S3 pipeline already handles polling, batching,
and retries, making Step Functions unnecessary and less cost-effective.
The option that says: Use a combination of Amazon SQS to queue the requests and then
asynchronously process them using On-Demand Amazon EC2 Instances is incorrect
because it simply violates the requirement that uploads “must be sent to Kinesis Data Streams,”
and it replaces serverless stream processing with EC2, which requires provisioning and
managing instances (capacity, patching, scaling), reducing cost-efficiency compared with
Lambda consumers on Kinesis.
The option that says: Send data from Kinesis Data Streams to Amazon Data Firehose and
configure it to deliver directly to S3 is incorrect because Data Firehose is a managed delivery
service optimized for buffering and loading data to destinations (like S3). While it supports
optional Lambda-based transformations, they are typically synchronous and constrained by
buffering and a 5-minute invocation limit. It is a brittle fit for 5-minute processing and offers less
flexibility than a Lambda consumer on Kinesis for application logic.
References:
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html
https://aws.amazon.com/blogs/compute/new-aws-lambda-controls-for-stream-processing-and-asynchronous-invocations/
Check out this AWS Lambda Cheat Sheet:
https://tutorialsdojo.com/aws-lambda/
Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide:
https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c03/
Domain
Design Cost-Optimized Architectures
Question 21 (Incorrect)
A company is implementing its Business Continuity Plan. As part of this initiative, the IT Director
instructed the IT team to set up an automated backup of all the Amazon EBS volumes attached
to the company’s Amazon EC2 instances. The solution must be implemented as soon as
possible and should be both cost-effective and simple to maintain.
What is the fastest and most cost-effective solution to automatically back up all of the EBS
volumes?
Your answer is incorrect
Use an EBS snapshot retention rule in AWS Backup to automatically manage snapshot
retention and expiration.
For an automated solution, create a scheduled job that calls the "create-snapshot"
command via the AWS CLI to take a snapshot of production EBS volumes periodically.
Set your Amazon Storage Gateway with EBS volumes as the data source and store the
backups in your on-premises servers through the storage gateway.
Correct answer
Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS
snapshots.
Overall explanation
Amazon Data Lifecycle Manager (DLM) automates the creation, retention, and deletion of
Amazon Elastic Block Store (EBS) snapshots. It simplifies EBS volume management by
allowing you to define policies that govern the lifecycle of these snapshots, ensuring regular
backups are created and obsolete snapshots are automatically removed.
You can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation,
retention, and deletion of snapshots taken to back up your Amazon EBS volumes. Automating
snapshot management helps you to:
- Protect valuable data by enforcing a regular backup schedule.
- Retain backups as required by auditors or internal compliance.
- Reduce storage costs by deleting outdated backups.
Combined with the monitoring features of Amazon EventBridge and AWS CloudTrail, Amazon
DLM provides a complete backup solution for EBS volumes at no additional cost.
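For illustration, a daily snapshot policy could be defined with boto3 along these lines; this is a sketch in which the execution role ARN, tag key, and schedule values are placeholders.

```python
import boto3

dlm = boto3.client("dlm")

# Placeholder IAM role; AWS provides a default named AWSDataLifecycleManagerDefaultRole
ROLE_ARN = "arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole"

dlm.create_lifecycle_policy(
    ExecutionRoleArn=ROLE_ARN,
    Description="Daily snapshots of all tagged EBS volumes",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        # Volumes carrying this tag are backed up by the policy
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 14},   # keep the 14 most recent snapshots
                "CopyTags": True,
            }
        ],
    },
)
```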
Hence, the correct answer is: Use Amazon Data Lifecycle Manager (Amazon DLM) to
automate the creation of EBS snapshots.
The option that says: For an automated solution, create a scheduled job that calls the
"create-snapshot" command via the AWS CLI to take a snapshot of production EBS
volumes periodically is incorrect because, even though this is a valid solution, you would
still need additional time to create and maintain the scheduled job that calls the "create-snapshot"
command. It is better to use Amazon Data Lifecycle Manager (Amazon DLM) instead, as it is the
fastest solution and lets you automate the creation, retention, and deletion of EBS snapshots
without having to write custom shell scripts or create scheduled jobs.
The option that says: Set your Amazon Storage Gateway with EBS volumes as the data
source and store the backups in your on-premises servers through the storage gateway
is incorrect because AWS Storage Gateway is used to back up data from your on-premises
servers to AWS, not to back up EBS volumes that are already attached to EC2 instances in your VPC.
The option that says: Use an EBS snapshot retention rule in AWS Backup to automatically
manage snapshot retention and expiration is incorrect. AWS Backup can manage retention
rules, but is typically used for centralized backup across multiple AWS services. It is not just
focused on automating EBS snapshot creation. While possible, it is not the most cost-effective
or straightforward solution for this specific use case.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
Check out this Amazon EBS Cheat Sheet:
https://tutorialsdojo.com/amazon-ebs/
Domain
Design Resilient Architectures
Question 23 (Incorrect)
Both historical records and frequently accessed data are stored on an on-premises storage
system. The amount of current data is growing at an exponential rate. As the storage’s capacity
is nearing its limit, the company’s Solutions Architect has decided to move the historical records
to AWS to free up space for the active data.
Which of the following architectures deliver the best solution in terms of cost and operational
management?
Use AWS DataSync to move the historical records from on-premises to AWS. Choose
Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle
configuration to move the data from the Standard tier to Amazon S3 Glacier Deep
Archive after 30 days.
Correct answer
Use AWS DataSync to move the historical records from on-premises to AWS. Choose
Amazon S3 Glacier Deep Archive to be the destination for the data.
Your answer is incorrect
Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle
configuration to move the data from the Standard tier to Amazon S3 Glacier Deep
Archive after 30 days.
Overall explanation
AWS DataSync makes it simple and fast to move large amounts of data online between on-
premises storage and Amazon S3, Amazon Elastic File System (Amazon EFS), or Amazon FSx
for Windows File Server. Manual tasks related to data transfers can slow down migrations and
burden IT operations. DataSync eliminates or automatically handles many of these tasks,
including scripting copy jobs, scheduling, and monitoring transfers, validating data, and
optimizing network utilization. The DataSync software agent connects to your Network File
System (NFS), Server Message Block (SMB) storage, and your self-managed object storage, so
you don’t have to modify your applications.
DataSync can transfer hundreds of terabytes and millions of files at speeds up to 10 times faster
than open-source tools, over the Internet or AWS Direct Connect links. You can use DataSync
to migrate active data sets or archives to AWS, transfer data to the cloud for timely analysis and
processing, or replicate data to AWS for business continuity. Getting started with DataSync is
easy: deploy the DataSync agent, connect it to your file system, select your AWS storage
resources, and start moving data between them. You pay only for the data you move.
Since the problem is mainly about moving historical records from on-premises to AWS, using
AWS DataSync is a more suitable solution. You can use DataSync to move cold data from
expensive on-premises storage systems directly to durable and secure long-term storage, such
as Amazon S3 Glacier or Amazon S3 Glacier Deep Archive.
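A rough boto3 sketch of this pattern is shown below; the NFS server, agent ARN, bucket, and IAM role are placeholders. The key detail is the S3StorageClass of the destination location, which writes objects directly to S3 Glacier Deep Archive.

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS share exposed through a deployed DataSync agent
source = datasync.create_location_nfs(
    ServerHostname="nas.corp.example.com",
    Subdirectory="/exports/historical-records",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-EXAMPLE"]},
)

# Destination: an S3 location that stores objects directly in the Deep Archive storage class
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::historical-records-archive",
    Subdirectory="/",
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3AccessRole"},
)

# The transfer task itself; no lifecycle rule is needed because objects land in Deep Archive immediately
datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="historical-records-to-deep-archive",
)
```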
Hence, the correct answer is the option that says: Use AWS DataSync to move the historical
records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the
destination for the data.
The following options are both incorrect:
- Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
- Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle
configuration to move the data from the Standard tier to Amazon S3 Glacier Deep
Archive after 30 days.
Although you can copy data from on-premises to AWS with Storage Gateway, it is not suitable
for transferring large sets of data to AWS. Storage Gateway is mainly used to provide low-latency
access to data by caching frequently accessed data on-premises while storing archive
data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data
transfer to AWS by sending only changed data and compressing data.
The option that says: Use AWS DataSync to move the historical records from on-premises
to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3
lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier
Deep Archive after 30 days is incorrect because, with AWS DataSync, you can transfer data
from on-premises directly to Amazon S3 Glacier Deep Archive. You don’t have to configure the
S3 lifecycle policy and wait for 30 days to move the data to Glacier Deep Archive.
References:
https://aws.amazon.com/datasync/faqs/
https://aws.amazon.com/storagegateway/faqs/
Check out these AWS DataSync and Storage Gateway Cheat Sheets:
https://tutorialsdojo.com/aws-datasync/
https://tutorialsdojo.com/aws-storage-gateway/
Domain
Design Cost-Optimized Architectures
Question 27 (Incorrect)
An insurance company plans to implement a message filtering feature in its web application. To
implement this solution, it needs to create separate Amazon SQS queues for each type of quote
request. The entire message processing should not exceed 24 hours.
As the Solutions Architect for the company, which of the following solutions should be
implemented to meet the above requirement?
Correct answer
Create one Amazon SNS topic and configure the SQS queues to subscribe to the SNS
topic. Set the filter policies in the SNS subscriptions to publish the message to the
designated SQS queue based on its quote request type.
Create one Amazon SNS topic and configure the SQS queues to subscribe to the SNS
topic. Publish the same messages to all SQS queues. Filter the messages in each queue
based on the quote request type.
Your answer is incorrect
Create multiple Amazon SNS topics and configure the SQS queues to subscribe to the
SNS topics. Publish the message to the designated SQS queue based on the quote
request type.
Create a data stream in Amazon Kinesis Data Streams. Use the Kinesis Client Library to
deliver all the records to the designated SQS queues based on the quote request type.
Overall explanation
Amazon SNS is a fully managed pub/sub messaging service. With Amazon SNS, you can use
topics to simultaneously distribute messages to multiple subscribing endpoints such as Amazon
SQS queues, AWS Lambda functions, HTTP endpoints, email addresses, and mobile devices
(SMS, Push).
Amazon SQS is a message queue service used by distributed applications to exchange
messages through a polling model. It can be used to decouple sending and receiving
components without requiring each component to be concurrently available.
A fanout scenario occurs when a message published to an SNS topic is replicated and pushed
to multiple endpoints, such as Amazon SQS queues, HTTP(S) endpoints, and Lambda
functions. This allows for parallel asynchronous processing.
For example, you can develop an application that publishes a message to an SNS topic
whenever an order is placed for a product. Then, two or more SQS queues that are subscribed
to the SNS topic receive identical notifications for the new order. An Amazon Elastic Compute
Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle the
processing or fulfillment of the order. And you can attach another Amazon EC2 server instance
to a data warehouse for analysis of all orders received.
By default, an Amazon SNS topic subscriber receives every message published to the topic.
You can use Amazon SNS message filtering to assign a filter policy to the topic subscription,
and the subscriber will only receive a message that they are interested in. Using Amazon SNS
and Amazon SQS together, messages can be delivered to applications that require immediate
notification of an event. This method is known as fanout to Amazon SQS queues.
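The snippet below is a simplified boto3 sketch of this fanout pattern; the topic, queue ARNs, and the quote_type message attribute are illustrative placeholders, and each queue's access policy must also allow SNS to send messages to it.

```python
import json
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:quote-requests"   # placeholder topic
# Placeholder map of quote type -> SQS queue ARN
QUEUES = {
    "auto": "arn:aws:sqs:us-east-1:111122223333:auto-quotes",
    "home": "arn:aws:sqs:us-east-1:111122223333:home-quotes",
}

# Subscribe each queue to the single topic and attach a filter policy so that
# a queue only receives messages whose "quote_type" attribute matches.
for quote_type, queue_arn in QUEUES.items():
    sns.subscribe(
        TopicArn=TOPIC_ARN,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"FilterPolicy": json.dumps({"quote_type": [quote_type]})},
    )

# Publishers set the attribute; SNS delivers the message only to the matching queue.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"customer_id": "12345", "details": "..."}),
    MessageAttributes={"quote_type": {"DataType": "String", "StringValue": "auto"}},
)
```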
Hence, the correct answer is: Create one Amazon SNS topic and configure the SQS queues
to subscribe to the SNS topic. Set the filter policies in the SNS subscriptions to publish
the message to the designated SQS queue based on its quote request type.
The option that says: Create one Amazon SNS topic and configure the SQS queues to
subscribe to the SNS topic. Publish the same messages to all SQS queues. Filter the
messages in each queue based on the quote request type is incorrect because this only
distributes identical messages to every SQS queue instead of routing each message to its
designated queue. You need to fan out the messages to multiple SQS queues using a filter policy
on the Amazon SNS subscriptions to allow parallel asynchronous processing. By doing so, the
entire message processing will not exceed 24 hours.
The option that says: Create multiple Amazon SNS topics and configure the SQS queues
to subscribe to the SNS topics. Publish the message to the designated SQS queue based
on the quote request type is incorrect because to implement the solution asked in the
scenario, you only need to use one Amazon SNS topic. To publish it to the designated SQS
queue, you must set a filter policy that allows you to fan out the messages. If you didn't set a
filter policy in Amazon SNS, the subscribers would receive all the messages published to the
SNS topic. Thus, using multiple SNS topics is not an appropriate solution for this scenario.
The option that says: Create a data stream in Amazon Kinesis Data Streams. Use the
Kinesis Client Library to deliver all the records to the designated SQS queues based on
the quote request type is incorrect because Amazon Kinesis Data Streams is not a message
filtering service. You should use Amazon SNS and SQS to distribute the messages to their
designated queues.
References:
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
Check out these Amazon SNS and SQS Cheat Sheets:
https://tutorialsdojo.com/amazon-sns/
https://tutorialsdojo.com/amazon-sqs/
Domain
Design High-Performing Architectures
Question 29 (Incorrect)
A company has multiple VPCs with IPv6 enabled for its suite of web applications. The Solutions
Architect attempted to deploy a new Amazon EC2 instance but encountered an error indicating
that there were no available IP addresses on the subnet. The VPC has a combination of IPv4
and IPv6 CIDR blocks, but the IPv4 CIDR blocks are nearing exhaustion. The architect needs a
solution that will resolve this issue while allowing future scalability.
How should the Solutions Architect resolve this problem?
Your answer is incorrect
Set up a new IPv4 subnet with a larger CIDR range. Associate the new subnet with the
VPC and then launch the instance.
Ensure that the VPC has IPv6 CIDRs only. Remove any IPv4 CIDRs associated with the
VPC.
Disable the IPv4 support in the VPC and use the available IPv6 addresses.
Correct answer
Set up a new IPv6-only subnet with a large CIDR range. Associate the new subnet with
the VPC then launch the instance.
Overall explanation
Amazon Virtual Private Cloud (VPC) is a service that lets you launch AWS resources in a
logically isolated virtual network that you define. You have complete control over your virtual
networking environment, including selection of your own IP address range, creation of subnets,
and configuration of route tables and network gateways. You can use both IPv4 and IPv6 for
most resources in your virtual private cloud, helping to ensure secure and easy access to
resources and applications.
A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a
specified subnet. When you create a VPC, you must specify a range of IPv4 addresses for the
VPC in the form of a CIDR block. Each subnet must reside entirely within one Availability Zone
and cannot span zones. You can also optionally assign an IPv6 CIDR block to your VPC, and
assign IPv6 CIDR blocks to your subnets.
If you have an existing VPC that supports IPv4 only, and resources in your subnets are
configured to use IPv4 only, you can enable IPv6 support for the VPC and those resources. The
VPC can then operate in dual-stack mode: your resources can communicate over IPv4, IPv6, or
both, and IPv4 and IPv6 communication are independent of each other. You cannot disable IPv4
support for your VPC and subnets, since IPv4 is the default IP addressing system for Amazon
VPC and Amazon EC2.
The scenario describes a situation where the company's IPv4 CIDR blocks are nearly
exhausted, and the Solutions Architect needs a solution that allows for future scalability. The
VPC is already IPv6-enabled. The most efficient and forward-thinking solution is to leverage the
available IPv6 address space to solve the problem of IPv4 address exhaustion. By creating an
IPv6-only subnet, the architect can launch the new EC2 instance without using any of the
scarce IPv4 addresses, thus resolving the immediate issue and ensuring long-term scalability.
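As a rough sketch, an IPv6-only subnet could be created with boto3 as follows; the VPC ID and the /64 IPv6 CIDR are placeholders that must come from the VPC's associated IPv6 block.

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"                 # placeholder dual-stack VPC
IPV6_SUBNET_CIDR = "2600:1f18:abcd:1200::/64"    # placeholder /64 from the VPC's IPv6 CIDR

# Create an IPv6-only subnet: no IPv4 CIDR is assigned, so launching instances
# here consumes none of the nearly exhausted IPv4 address space.
subnet = ec2.create_subnet(
    VpcId=VPC_ID,
    Ipv6Native=True,
    Ipv6CidrBlock=IPV6_SUBNET_CIDR,
    AvailabilityZone="us-east-1a",
)
print(subnet["Subnet"]["SubnetId"])
```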
Hence, the correct answer is: Set up a new IPv6-only subnet with a large CIDR range.
Associate the new subnet with the VPC then launch the instance.
The option that says: Set up a new IPv4 subnet with a larger CIDR range. Associate the
new subnet with the VPC and then launch the instance is incorrect because it is not a
scalable, long-term solution. While creating a new IPv4 subnet would temporarily solve the
immediate problem of address exhaustion, it does not address the fundamental issue of the
limited IPv4 address space. The company would eventually face the same problem again. This
approach fails to meet the requirement for future scalability and is a temporary fix rather than a
sustainable strategy.
The option that says: Ensure that the VPC has IPv6 CIDRs only. Remove any IPv4 CIDRs
associated with the VPC is incorrect because it is not technically possible. A VPC
is an IPv4-based network by default: you cannot create a VPC that is IPv6-only, nor can you
remove all IPv4 CIDR blocks from an existing VPC. You can, however, use an IPv6 CIDR block
alongside an IPv4 block in a dual-stack configuration, or create IPv6-only subnets within a dual-
stack VPC.
The option that says: Disable the IPv4 support in the VPC and use the available IPv6
addresses is incorrect because you cannot disable IPv4 support for a VPC. IPv4 is the default
and a mandatory component of a VPC. Disabling it would break all existing services that rely on
IPv4 connectivity and would violate the fundamental design of the VPC service. The correct
approach is to utilize IPv6 for new resources while maintaining IPv4 for existing services as part
of a migration strategy.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html
https://aws.amazon.com/vpc/faqs/
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/
Domain
Design Resilient Architectures
Question 31 (Incorrect)
A company has an on-premises MySQL database that needs to be replicated in Amazon S3 as
CSV files. The database will eventually be launched to an Amazon Aurora Serverless cluster
and be integrated with an RDS Proxy to allow the web applications to pool and share database
connections. Once data has been fully copied, the ongoing changes to the on-premises
database should be continually streamed into the S3 bucket. The company wants a solution that
can be implemented with little management overhead yet still highly secure.
Which ingestion pattern should a solutions architect take?
Use AWS Schema Conversion Tool (AWS SCT) to convert MySQL data to CSV files. Set
up the AWS Application Migration Service (AWS MGN) to capture ongoing changes from
the on-premises MySQL database and send them to Amazon S3.
Your answer is incorrect
Use an AWS Snowball Edge cluster to migrate data to Amazon S3 and AWS DataSync to
capture ongoing changes. Create your own custom AWS KMS envelope encryption key
for the associated AWS Snowball Edge job.
Correct answer
Create a full load and change data capture (CDC) replication task using AWS Database
Migration Service (AWS DMS). Add a new Certificate Authority (CA) certificate and create
an AWS DMS endpoint with SSL.
Set up a full load replication task using AWS Database Migration Service (AWS DMS).
Launch an AWS DMS endpoint with SSL using the AWS Network Firewall service.
Overall explanation
AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate
relational databases, data warehouses, NoSQL databases, and other types of data stores. You
can use AWS DMS to migrate your data into the AWS Cloud, between on-premises instances
(through an AWS Cloud setup) or between combinations of cloud and on-premises setups. With
AWS DMS, you can perform one-time migrations, and you can replicate ongoing changes to
keep sources and targets in sync.
You can migrate data to Amazon S3 using AWS DMS from any of the supported database
sources. When using Amazon S3 as a target in an AWS DMS task, both full load and change
data capture (CDC) data is written to comma-separated value (.csv) format by default.
The comma-separated value (.csv) format is the default storage format for Amazon S3 target
objects. For more compact storage and faster queries, you can instead use Apache Parquet
(.parquet) as the storage format.
You can encrypt connections for source and target endpoints by using Secure Sockets Layer
(SSL). To do so, you can use the AWS DMS Management Console or AWS DMS API to assign
a certificate to an endpoint. You can also use the AWS DMS console to manage your
certificates.
Not all databases use SSL in the same way. Amazon Aurora MySQL-Compatible Edition uses
the server name, the endpoint of the primary instance in the cluster, as the endpoint for SSL. An
Amazon Redshift endpoint already uses an SSL connection and does not require an SSL
connection set up by AWS DMS.
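The boto3 sketch below outlines the pieces involved, assuming a replication instance already exists; every identifier, hostname, ARN, and credential is a placeholder. It imports a CA certificate, creates an SSL-enabled MySQL source endpoint and an S3 target endpoint, and defines a full-load-and-CDC task.

```python
import json
import boto3

dms = boto3.client("dms")

# Upload the CA certificate used to validate the SSL connection to the on-premises MySQL server
cert = dms.import_certificate(
    CertificateIdentifier="onprem-mysql-ca",
    CertificatePem=open("onprem-mysql-ca.pem").read(),   # placeholder CA certificate file
)

# Source endpoint: the on-premises MySQL database, with SSL enforced
source = dms.create_endpoint(
    EndpointIdentifier="onprem-mysql",
    EndpointType="source",
    EngineName="mysql",
    ServerName="db.corp.example.com",
    Port=3306,
    Username="dms_user",
    Password="REPLACE_ME",
    SslMode="verify-ca",
    CertificateArn=cert["Certificate"]["CertificateArn"],
)

# Target endpoint: S3 bucket; DMS writes full-load and CDC data as .csv files by default
target = dms.create_endpoint(
    EndpointIdentifier="s3-replica",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "mysql-replica-bucket",
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/DmsS3AccessRole",
    },
)

# Full load plus ongoing change data capture in a single task
dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-s3-full-load-and-cdc",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:EXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```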
Hence, the correct answer is: Create a full load and change data capture (CDC) replication
task using AWS Database Migration Service (AWS DMS). Add a new Certificate Authority
(CA) certificate and create an AWS DMS endpoint with SSL.
The option that says: Set up a full load replication task using AWS Database Migration
Service (AWS DMS). Launch an AWS DMS endpoint with SSL using the AWS Network
Firewall service is incorrect because a full load replication task alone won't capture ongoing
changes to the database. You still need to implement a change data capture (CDC) replication
to copy the recent changes after the migration. Moreover, the AWS Network Firewall service is
not capable of creating an AWS DMS endpoint with SSL. The Certificate Authority (CA)
certificate can be directly uploaded to the AWS DMS console without the AWS Network Firewall
at all.
The option that says: Use an AWS Snowball Edge cluster to migrate data to Amazon S3
and AWS DataSync to capture ongoing changes is incorrect. While this is doable, it's more
suited to the migration of large databases which require the use of two or more Snowball Edge
appliances. Also, the usage of AWS DataSync for replicating ongoing changes to Amazon S3
requires extra steps that can be simplified with AWS DMS.
The option that says: Use AWS Schema Conversion Tool (AWS SCT) to convert MySQL
data to CSV files. Set up the AWS Application Migration Service (AWS MGN) to capture
ongoing changes from the on-premises MySQL database and send them to Amazon S3 is
incorrect. AWS SCT is not used for data replication; it only simplifies converting the source
database schema to a format compatible with the target database during migration. In addition,
using the AWS Application Migration Service (AWS MGN) for this scenario is inappropriate. This
service is primarily used for lift-and-shift migrations of applications from physical infrastructure,
VMware vSphere, Microsoft Hyper-V, Amazon Elastic Compute Cloud (Amazon EC2), Amazon
Virtual Private Cloud (Amazon VPC), and other clouds to AWS.
References:
https://aws.amazon.com/blogs/big-data/loading-ongoing-data-lake-changes-with-aws-dms-and-aws-glue/
https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.SSL.Limitations
Check out this AWS Database Migration Service Cheat Sheet:
https://tutorialsdojo.com/aws-database-migration-service/
Domain
Design High-Performing Architectures
Question 34 (Incorrect)
A media company wants to ensure that the images it delivers through Amazon CloudFront are
compatible across various user devices. The company plans to serve images in WebP format to
user agents that support it and fall back to JPEG format for those that don't. Additionally, the
company wants to add a custom header to the response for tracking purposes.
As a solution architect, what approach would one recommend to meet these requirements while
minimizing operational overhead?
Implement an image conversion service on Amazon EC2 instances and integrate it with
CloudFront. Use AWS Lambda functions to modify the response headers and serve the
appropriate format based on the User-Agent header.
Generate a CloudFront response headers policy. Utilize the policy to deliver the suitable
image format according to the User-Agent HTTP header in the incoming request.
Correct answer
Configure CloudFront behaviors to handle different image formats based on the User-
Agent header. Use Lambda@Edge functions to modify the response headers and serve
the appropriate format.
Your answer is incorrect
Create multiple CloudFront distributions, each serving a specific image format (WebP or
JPEG). Route incoming requests based on the User-Agent header to the respective
distribution using Amazon Route 53.
Overall explanation
Amazon CloudFront is a content delivery network (CDN) service that enables the efficient
distribution of web content to users across the globe. It reduces latency by caching static and
dynamic content in multiple edge locations worldwide and improves the overall user experience.
Lambda@Edge allows you to run Lambda functions at the edge locations of the CloudFront
CDN. With this, you can perform various tasks, such as modifying HTTP headers, generating
dynamic responses, implementing security measures, and customizing content based on user
preferences, device type, location, or other criteria.
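As an illustrative sketch (not the exact functions from the scenario), a pair of Python Lambda@Edge handlers could look like this; the WebP detection heuristic and the custom header name are assumptions, and CloudFront must be configured to forward the relevant headers to the origin.

```python
# Origin-request handler: decide which image variant CloudFront fetches from the origin.
def handle_origin_request(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    accept = headers.get("accept", [{"value": ""}])[0]["value"]
    user_agent = headers.get("user-agent", [{"value": ""}])[0]["value"]

    # Illustrative heuristic only: modern browsers advertise WebP support in the Accept header.
    supports_webp = "image/webp" in accept or "Chrome" in user_agent

    if request["uri"].endswith(".jpg") and supports_webp:
        request["uri"] = request["uri"][:-len(".jpg")] + ".webp"
    return request


# Origin-response handler: append a custom header for tracking (header name is hypothetical).
def handle_origin_response(event, context):
    response = event["Records"][0]["cf"]["response"]
    response["headers"]["x-image-pipeline"] = [
        {"key": "X-Image-Pipeline", "value": "lambda-at-edge"}
    ]
    return response
```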
By creating cache behaviors for the image paths and attaching Lambda@Edge functions to the request and response events, CloudFront can inspect the User-Agent header, serve WebP to browsers that support it and JPEG to those that don't, and append a custom header to every response for tracking, all without managing any servers.
Hence, the correct answer is: Configure CloudFront behaviors to handle different image formats based on the User-Agent header. Use Lambda@Edge functions to modify the response headers and serve the appropriate format.
The option that says: Implement an image conversion service on Amazon EC2 instances and integrate it with CloudFront. Use AWS Lambda functions to modify the response headers and serve the appropriate format based on the User-Agent header is incorrect because provisioning and maintaining EC2 instances adds significant operational overhead when the same result can be achieved with Lambda@Edge alone.
The option that says: Generate a CloudFront response headers policy. Utilize the policy to deliver the suitable image format according to the User-Agent HTTP header in the incoming request is incorrect because a response headers policy can only add or override HTTP headers on responses; it cannot inspect the User-Agent header or change which image format is returned.
The option that says: Create multiple CloudFront distributions, each serving a specific image format (WebP or JPEG). Route incoming requests based on the User-Agent header to the respective distribution using Amazon Route 53 is incorrect because Route 53 is a DNS service and cannot route requests based on HTTP headers such as User-Agent. Maintaining multiple distributions would also increase operational overhead.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html
https://aws.amazon.com/lambda/edge/
Check out this Amazon CloudFront Cheat Sheet:
https://tutorialsdojo.com/amazon-cloudfront/
Domain
Design Cost-Optimized Architectures
Question 42 (Incorrect)
A company wants to streamline the process of creating multiple AWS accounts within an AWS
Organization. Each organizational unit (OU) must be able to launch new accounts with
preapproved configurations from the security team that standardize the baselines and
network configurations for all accounts in the organization.
Which solution entails the least amount of effort to implement?
Your answer is incorrect
Set up an AWS Config aggregator on the root account of the organization to enable
multi-account, multi-region data aggregation. Deploy conformance packs to standardize
the baselines and network configurations for all accounts.
Configure AWS Resource Access Manager (AWS RAM) to launch new AWS accounts as
well as standardize the baselines and network configurations for each organization unit
Correct answer
Set up an AWS Control Tower Landing Zone. Enable pre-packaged guardrails to enforce
policies or detect violations.
Centralize the creation of AWS accounts using AWS Systems Manager OpsCenter.
Enforce policies and detect violations to all AWS accounts using AWS Security Hub.
Overall explanation
AWS Control Tower provides a single location to easily set up your new well-architected multi-
account environment and govern your AWS workloads with rules for security, operations, and
internal compliance. You can automate the setup of your AWS environment with best-practices
blueprints for multi-account structure, identity, access management, and account provisioning
workflow. For ongoing governance, you can select and apply pre-packaged policies enterprise-
wide or to specific groups of accounts.
AWS Control Tower provides three methods for creating member accounts:
- Through the Account Factory console that is part of AWS Service Catalog.
- Through the Enroll account feature within AWS Control Tower.
- From your AWS Control Tower landing zone's management account, using Lambda code and
appropriate IAM roles.
AWS Control Tower offers "guardrails" for ongoing governance of your AWS environment.
Guardrails provide governance controls by preventing the deployment of resources that don’t
conform to selected policies or detecting non-conformance of provisioned resources. AWS
Control Tower automatically implements guardrails using multiple building blocks such as AWS
CloudFormation to establish a baseline, AWS Organizations service control policies (SCPs) to
prevent configuration changes, and AWS Config rules to continuously detect non-conformance.
In this scenario, the requirement is to simplify the creation of AWS accounts that have
governance guardrails and a defined baseline in place. To save time and resources, you can
use AWS Control Tower to automate account creation. With the appropriate user group
permissions, you can specify standardized baselines and network configurations for all accounts
in the organization.
Hence, the correct answer is: Set up an AWS Control Tower Landing Zone. Enable pre-
packaged guardrails to enforce policies or detect violations.
The option that says: Configure AWS Resource Access Manager (AWS RAM) to launch
new AWS accounts as well as standardize the baselines and network configurations for
each organization unit is incorrect. The AWS Resource Access Manager (RAM) service
simply helps you to securely share your resources across AWS accounts or within your
organization or organizational units (OUs) in AWS Organizations. It is not capable of launching
new AWS accounts with preapproved configurations.
The option that says: Set up an AWS Config aggregator on the root account of the
organization to enable multi-account, multi-region data aggregation. Deploy
conformance packs to standardize the baselines and network configurations for all
accounts is incorrect. AWS Config cannot provision accounts. A conformance pack is only a
collection of AWS Config rules and remediation actions that can be easily deployed as a single
entity in an account and a Region or across an organization in AWS Organizations.
The option that says: Centralize the creation of AWS accounts using AWS Systems
Manager OpsCenter. Enforce policies and detect violations to all AWS accounts using
AWS Security Hub is incorrect. AWS Systems Manager is a collection of services used to
manage applications and infrastructure running in AWS, usually within a single AWS account.
The AWS Systems Manager OpsCenter service is just one of the capabilities of Systems Manager;
it provides a central location where operations engineers and IT professionals can view,
investigate, and resolve operational work items (OpsItems) related to AWS resources. Neither
service can create AWS accounts.
References:
https://docs.aws.amazon.com/controltower/latest/userguide/account-factory.html
https://aws.amazon.com/blogs/mt/how-to-automate-the-creation-of-multiple-accounts-in-aws-control-tower/
https://aws.amazon.com/blogs/aws/aws-control-tower-set-up-govern-a-multi-account-aws-environment/
Domain
Design Secure Architectures
Question 46 (Incorrect)
A company wants to organize the way it tracks its spending on AWS resources. A report that
summarizes the total billing accrued by each department must be generated at the end of the
month.
Which solution will meet the requirements?
Correct answer
Tag resources with the department name and enable cost allocation tags.
Tag resources with the department name and configure a budget action in AWS Budget.
Create a Cost and Usage report for AWS services that each department is using.
Your answer is incorrect
Use AWS Cost Explorer to view spending and filter usage data by Resource.
Overall explanation
A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a
value. For each resource, each tag key must be unique, and each tag key can have only one
value. You can use tags to organize your resources and cost allocation tags to track your AWS
costs on a detailed level.
After you or AWS applies tags to your AWS resources (such as Amazon EC2 instances or
Amazon S3 buckets) and you activate the tags in the Billing and Cost Management console,
AWS generates a cost allocation report as a comma-separated value (CSV file) with your usage
and costs grouped by your active tags. You can apply tags that represent business categories
(such as cost centers, application names, or owners) to organize your costs across multiple
services.
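For illustration, the boto3 snippet below tags a resource with a hypothetical Department tag and then groups the monthly spend by that tag once it has been activated as a cost allocation tag in the Billing console; the instance ID, tag value, and dates are placeholders.

```python
import boto3

# 1) Tag resources with the department name (EC2 shown; the same Key/Value pair is applied
#    to S3 buckets, RDS instances, etc. through their respective tagging APIs).
boto3.client("ec2").create_tags(
    Resources=["i-0123456789abcdef0"],            # placeholder instance ID
    Tags=[{"Key": "Department", "Value": "Finance"}],
)

# 2) Once "Department" is activated as a cost allocation tag, monthly spend can be grouped by it.
ce = boto3.client("ce")
report = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Department"}],
)
for group in report["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```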
Hence, the correct answer is: Tag resources with the department name and enable cost
allocation tags.
The option that says: Tag resources with the department name and configure a budget
action in AWS Budget is incorrect. AWS Budgets only allows you to be alerted and run custom
actions if your budget thresholds are exceeded.
The option that says: Use AWS Cost Explorer to view spending and filter usage data by
Resource is incorrect. The Resource filter just lets you track costs on EC2 instances. This is
quite limited compared with using the Cost Allocation Tags method.
The option that says: Create a Cost and Usage report for AWS services that each
department is using is incorrect. The report must contain a breakdown of costs incurred by
each department based on tags and not based on AWS services, which is what the Cost and
Usage Report (CUR) contains.
References:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
https://aws.amazon.com/blogs/aws-cloud-financial-management/cost-allocation-blog-series-2-aws-generated-vs-user-defined-cost-allocation-tag/
Check out this AWS Billing and Cost Management Cheat sheet:
https://tutorialsdojo.com/aws-billing-and-cost-management/
Domain
Design Cost-Optimized Architectures
Question 48 (Incorrect)
A Solutions Architect created a new Standard-class Amazon S3 bucket to store financial reports
that are not frequently accessed but should immediately be available when an auditor requests
the reports. To save costs, the Architect changed the storage class of the S3 bucket from
Standard to Infrequent Access storage class.
In S3 Standard - Infrequent Access storage class, which of the following statements are true?
(Select TWO.)
It automatically moves data to the most cost-effective access tier without any operational
overhead.
Your selection is correct
It is designed for data that is accessed less frequently.
Your selection is incorrect
It provides high latency and low throughput performance
Correct selection
It is designed for data that requires rapid access when needed.
Ideal to use for data archiving.
Overall explanation
Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for
data that is accessed less frequently, but requires rapid access when needed. Standard - IA
offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per
GB storage price and per GB retrieval fee.
This combination of low cost and high performance makes Standard - IA ideal for long-term
storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is
set at the object level and can exist in the same bucket as Standard, allowing you to use
lifecycle policies to automatically transition objects between storage classes without any
application changes.
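A short boto3 sketch of both approaches, using a hypothetical bucket, key, and prefix:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "financial-reports-archive"   # placeholder bucket name

# Store a report directly in Standard-IA: lower per-GB cost, still millisecond access
s3.put_object(
    Bucket=BUCKET,
    Key="reports/2024-q1.pdf",
    Body=open("2024-q1.pdf", "rb"),
    StorageClass="STANDARD_IA",
)

# Or transition existing Standard objects to Standard-IA automatically after 30 days
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "reports-to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": "reports/"},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```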
Key Features:
- Same low latency and high throughput performance of Standard
- Designed for durability of 99.999999999% of objects
- Designed for 99.9% availability over a given year
- Backed with the Amazon S3 Service Level Agreement for availability
- Supports SSL encryption of data in transit and at rest
- Lifecycle management for automatic migration of objects
Hence, the correct answers are:
- It is designed for data that is accessed less frequently.
- It is designed for data that requires rapid access when needed.
The option that says: It automatically moves data to the most cost-effective access tier
without any operational overhead is incorrect because it actually refers to Amazon S3 -
Intelligent Tiering, which is the only cloud storage class that delivers automatic cost savings by
moving objects between different access tiers when access patterns change.
The option that says: It provides high latency and low throughput performance is incorrect
because it should just be "low latency" and "high throughput" instead. S3 automatically scales
performance to meet user demands.
The option that says: Ideal to use for data archiving is incorrect because this statement refers
to Amazon S3 Glacier. Glacier is a secure, durable, and extremely low-cost cloud storage
service for data archiving and long-term backup.
References:
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/s3/faqs
Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/amazon-s3/
Domain
Design High-Performing Architectures
Question 53 (Incorrect)
A company plans to migrate its suite of containerized applications running on-premises to a
container service in AWS. The solution must be cloud-agnostic and use an open-source
platform that can automatically manage containerized workloads and services. It should also
use the same configuration and tools across various production environments.
What should the Solution Architect do to properly migrate and satisfy the given requirement?
Migrate the application to Amazon Elastic Container Service with ECS tasks that use the
Amazon EC2 launch type.
Your answer is incorrect
Migrate the application to Amazon Elastic Container Service with ECS tasks that use the
AWS Fargate launch type.
Migrate the application to Amazon Container Registry (ECR) with Amazon EC2 instance
worker nodes.
Correct answer
Migrate the application to Amazon Elastic Kubernetes Service with EKS worker nodes.
Overall explanation
Amazon EKS provisions and scales the Kubernetes control plane, including the API servers
and backend persistence layer, across multiple AWS availability zones for high availability and
fault tolerance. Amazon EKS automatically detects and replaces unhealthy control plane nodes
and provides patching for the control plane. Amazon EKS is integrated with many AWS services
to provide scalability and security for your applications. These services include Elastic Load
Balancing for load distribution, IAM for authentication, Amazon VPC for isolation, and AWS
CloudTrail for logging.
To migrate the application to a container service, you can use Amazon ECS or Amazon EKS.
The key point in this scenario, however, is the requirement for a cloud-agnostic, open-source
platform. Take note that Amazon ECS is an AWS proprietary container service, so it is not an
open-source platform. Amazon EKS, on the other hand, runs Kubernetes, a portable, extensible,
open-source platform for managing containerized workloads and services. Kubernetes is
considered cloud-agnostic because it allows you to move your containers to other cloud service
providers.
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use
all of the existing plugins and tools from the Kubernetes community. Applications running on
Amazon EKS are fully compatible with applications running on any standard Kubernetes
environment, whether running in on-premises data centers or public clouds. This means that
you can easily migrate any standard Kubernetes application to Amazon EKS without any code
modification required.
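For illustration, provisioning the EKS control plane with boto3 could look like the sketch below; the cluster name, Kubernetes version, IAM role, and subnet IDs are placeholders, and worker nodes (a managed node group, self-managed nodes, or Fargate profiles) would still need to be added separately.

```python
import boto3

eks = boto3.client("eks")

# Placeholder role and subnets; the role must allow EKS to manage the control plane.
eks.create_cluster(
    name="migrated-apps",
    version="1.29",   # placeholder Kubernetes version
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0abc1234", "subnet-0def5678"],
    },
)

# Once the control plane is ACTIVE, the same kubectl manifests and Helm charts used
# on-premises can be applied unchanged, since EKS runs upstream Kubernetes.
```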
Hence, the correct answer is: Migrate the application to Amazon Elastic Kubernetes
Service with EKS worker nodes.
The option that says: Migrate the application to Amazon Container Registry (ECR) with
Amazon EC2 instance worker nodes is incorrect because Amazon ECR is just a fully-
managed Docker container registry. Also, this option is not an open-source platform that can
manage containerized workloads and services.
The option that says: Migrate the application to Amazon Elastic Container Service with
ECS tasks that use the AWS Fargate launch type is incorrect because it is stated in the
scenario that you have to migrate the application suite to an open-source platform. AWS
Fargate is just a serverless compute engine for containers. It is not cloud-agnostic since you
cannot use the same configuration and tools if you move it to another cloud service provider
such as Microsoft Azure or Google Cloud Platform (GCP).
The option that says: Migrate the application to Amazon Elastic Container Service with
ECS tasks that use the Amazon EC2 launch type is incorrect because Amazon ECS is an
AWS proprietary managed container orchestration service. You should use Amazon EKS since
Kubernetes is an open-source platform and is considered cloud-agnostic. With Kubernetes, you
can use the same configuration and tools that you're currently using in AWS even if you move
your containers to another cloud service provider.
References:
https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
https://aws.amazon.com/eks/faqs/
Check out our library of AWS Cheat Sheets:
https://tutorialsdojo.com/links-to-all-aws-cheat-sheets/
Domain
Design High-Performing Architectures
Question 55Incorrect
A media company has two VPCs: VPC-1 and VPC-2, with a peering connection between them.
VPC-1 contains only private subnets, while VPC-2 contains only public subnets. The company
uses a single AWS Direct Connect connection and a virtual interface to connect its on-premises
network with VPC-1.
Which of the following options increase the fault tolerance of the connection to VPC-1? (Select
TWO.)
Your selection is incorrect
Establish a new AWS Direct Connect connection and private virtual interface in the same
region as VPC-2.
Correct selection
Establish another AWS Direct Connect connection and private virtual interface in the
same AWS region as VPC-1.
Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.
Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private
virtual interface in the same region as VPC-2.
Your selection is correct
Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.
Overall explanation
In this scenario, you have two VPCs with a peering connection between them. Note that
a VPC peering connection does not support edge-to-edge routing. This means that if either VPC
in a peering relationship has one of the following connections, you cannot extend the peering
relationship to that connection:
- A VPN connection or an AWS Direct Connect connection to a corporate network
- An Internet connection through an Internet gateway
- An Internet connection in a private subnet through a NAT device
- A gateway VPC endpoint to an AWS service; for example, an endpoint to Amazon S3.
- (IPv6) A ClassicLink connection. You can enable IPv4 communication between a linked EC2-
Classic instance and instances in a VPC on the other side of a VPC peering connection.
However, IPv6 is not supported in EC2-Classic, so you cannot extend this connection for IPv6
communication.
Because edge-to-edge routing is not supported, traffic from the on-premises network cannot
reach VPC-1 by passing through VPC-2 over the peering connection. Any option that adds
connectivity to VPC-2 therefore does nothing to improve the resiliency of the connection to
VPC-1; the additional connectivity must terminate at VPC-1 itself.
Hence, the correct answers are: Establish another AWS Direct Connect connection and
private virtual interface in the same AWS region as VPC-1 and Establish a hardware VPN
over the Internet between VPC-1 and the on-premises network. A second Direct Connect
connection with its own private virtual interface removes the existing link as a single point of
failure, while a hardware VPN that terminates in VPC-1 provides an independent failover path
over the Internet.
The option that says: Establish a new AWS Direct Connect connection and private virtual
interface in the same region as VPC-2 is incorrect because the on-premises network still
cannot reach VPC-1 through VPC-2, since VPC peering does not support edge-to-edge routing.
The option that says: Establish a hardware VPN over the Internet between VPC-2 and the
on-premises network is incorrect for the same reason: the VPN terminates in VPC-2, and the
peering connection cannot extend it to VPC-1.
The option that says: Use the AWS VPN CloudHub to create a new AWS Direct Connect
connection and private virtual interface in the same region as VPC-2 is incorrect because,
aside from the edge-to-edge routing limitation, AWS VPN CloudHub is used to connect multiple
Site-to-Site VPN connections in a hub-and-spoke model; it is not a tool for creating Direct
Connect connections or virtual interfaces.
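As a rough illustration of the first correct option, the following Python (boto3) sketch provisions a private virtual interface on a second Direct Connect connection and attaches it to the virtual private gateway of VPC-1. This is only a sketch under assumed values: every identifier, the VLAN, and the ASN below are placeholders, not values from the scenario.

# Hedged sketch: create a private virtual interface on a *second* Direct
# Connect connection and attach it to VPC-1's virtual private gateway.
# All IDs, the VLAN, and the BGP ASN are placeholder assumptions.
import boto3

dx = boto3.client("directconnect")

response = dx.create_private_virtual_interface(
    connectionId="dxcon-EXAMPLE2",                # the new, second DX connection
    newPrivateVirtualInterface={
        "virtualInterfaceName": "vpc1-secondary-vif",
        "vlan": 101,
        "asn": 65000,                             # customer-side BGP ASN
        "virtualGatewayId": "vgw-EXAMPLE",        # VGW attached to VPC-1
        "addressFamily": "ipv4",
    },
)

# The interface starts in a pending state until the BGP session is established.
print(response["virtualInterfaceState"])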
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/
Domain
Design Secure Architectures
Question 65Incorrect
A solutions architect is instructed to host a website that consists of HTML, CSS, and some
Javascript files. The web pages will display several high-resolution images. The website should
have optimal loading times and be able to respond to high request rates.
Which of the following architectures can provide the most cost-effective and fastest loading
experience?
Host the website using an Nginx server in an EC2 instance. Upload the images in an S3
bucket. Use CloudFront as a CDN to deliver the images closer to end-users.
Your answer is incorrect
Host the website in an AWS Elastic Beanstalk environment. Upload the images in an S3
bucket. Use CloudFront as a CDN to deliver the images closer to your end-users.
Launch an Auto Scaling Group using an AMI that has a pre-configured Apache web
server, then configure the scaling policy accordingly. Store the images in an Elastic
Block Store. Then, point your instance’s endpoint to AWS Global Accelerator.
Correct answer
Upload the HTML, CSS, Javascript, and the images in a single bucket. Then enable
website hosting. Create a CloudFront distribution and point the domain to the S3
website endpoint.
Overall explanation
Amazon S3 is an object storage service that offers industry-leading scalability, data availability,
security, and performance. Additionally, you can use Amazon S3 to host a static website, where
individual web pages include only static content. Because Amazon S3 is highly scalable and you
pay only for what you use, you can start small and grow your application as needed, with no
compromise on performance or reliability.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency and high transfer
speeds. CloudFront can be integrated with Amazon S3 for fast delivery of data originating from
an S3 bucket to your end-users. By design, delivering data out of CloudFront can be more cost-
effective than delivering it directly from S3 to your users.
In this scenario, since we are only dealing with static content, we can leverage the static website
hosting feature of S3 and then improve the architecture further by integrating it with CloudFront.
This way, users will be able to load both the web pages and the images faster than if we hosted
them on a web server that we built from scratch.
Hence, the correct answer is: Upload the HTML, CSS, Javascript, and the images in a
single bucket. Then enable website hosting. Create a CloudFront distribution and point
the domain to the S3 website endpoint.
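For reference, a minimal Python (boto3) sketch of the S3 half of this setup might look like the following; the bucket name and document keys are assumptions, and the resulting S3 website endpoint would then be configured as the origin of the CloudFront distribution.

# Hedged sketch: enable static website hosting on an existing bucket.
# "example-corporate-site", "index.html", and "error.html" are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="example-corporate-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# The bucket's website endpoint (for example,
# example-corporate-site.s3-website-us-east-1.amazonaws.com) is then used as
# the custom origin of a CloudFront distribution so that the pages and images
# are cached at edge locations close to end-users.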
The option that says: Host the website using an Nginx server in an EC2 instance. Upload
the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to
end-users is incorrect. Running your own web server to host a static website in AWS is a costly
solution. Web servers on EC2 instances are usually used for hosting applications that require
server-side processing (connecting to a database, data validation, etc.). Since static websites
contain only web pages with fixed content, we should use S3 website hosting instead.
The option that says: Launch an Auto Scaling Group using an AMI that has a pre-
configured Apache web server, then configure the scaling policy accordingly. Store the
images in an Elastic Block Store. Then, point your instance’s endpoint to AWS Global
Accelerator is incorrect. This is how static websites were served in the past. Now, with the
help of S3 website hosting, we can serve static content from a durable, highly available, and
highly scalable environment without managing any servers. Hosting a static website in S3 is
cheaper than hosting it on an EC2 instance. In addition, using an Auto Scaling group to scale
instances that host a static website is an over-engineered solution that carries unnecessary
costs. S3 automatically scales to handle high request rates, and you pay only for what you use.
The option that says: Host the website in an AWS Elastic Beanstalk environment. Upload
the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to
your end-users is incorrect. AWS Elastic Beanstalk simply sets up the infrastructure (EC2
instances, load balancer, Auto Scaling group) for your application. It is a more expensive
solution, and overkill, for hosting a handful of client-side files.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-
a-match-made-in-the-cloud/
Check out these Amazon S3 and CloudFront Cheat Sheets:
https://tutorialsdojo.com/amazon-s3/
https://tutorialsdojo.com/amazon-cloudfront/
Domain
Design Cost-Optimized Architectures