AWS Certified Solutions Architect Associate Practice Exams

AWS Certified Solutions Architect – Associate
● Practice Test 1: AWS Certified Solutions Architect Associate Practice Test 1
● Practice Test 2: AWS Certified Solutions Architect Associate Practice Test 2
● Practice Test 3: AWS Certified Solutions Architect Associate Practice Test 3
● Practice Test 4: AWS Certified Solutions Architect Associate Practice Test 4
● Practice Test 5: AWS Certified Solutions Architect Associate Practice Test 5
● Practice Test 6: AWS Certified Solutions Architect Associate Practice Test 6

Practice Test 2: AWS Certified Solutions Architect Associate Practice Test 2
AWS Certified Solutions Architect Associate Practice Test 2 - Results
Attempt 1

Question 1 (Incorrect)
A company has a static corporate website hosted in a standard Amazon S3 bucket and a new
web domain name that was registered using Amazon Route 53. There is a requirement to
integrate these two services in order to successfully launch the corporate website.
What are the prerequisites when routing traffic using Route 53 to a website that is hosted in an
S3 Bucket? (Select TWO.)
Correct selection
The S3 bucket name must be the same as the domain name.
The record set must be of type "MX".
Your selection is incorrect
The Cross-Origin Resource Sharing (CORS) option should be enabled in the S3 bucket.
Correct selection
A registered domain name
Your selection is incorrect
The S3 bucket must be in the same region as the hosted zone.
Overall explanation
Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service
designed to efficiently and reliably route end-user requests to Internet applications. As part of
the AWS (Amazon Web Services) cloud ecosystem, Route 53 provides a range of features for
managing domain names, DNS records, and directing web traffic.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
scalable, durable, and low-latency storage. It is commonly used to store and retrieve large
volumes of data, including documents, images, videos, backups, and log files.

Hence, the correct answers are:


- The S3 bucket name must be the same as the domain name.
- A registered domain name.
The option that says: The record set must be of type "MX" is incorrect because an MX record
specifies the mail server responsible for accepting email messages on behalf of a domain
name. This is not relevant to routing traffic to a website hosted in an S3 bucket.
The option that says: The S3 bucket must be in the same region as the hosted zone is
incorrect because there is no constraint that the S3 bucket must be in the same region as the
hosted zone in order for the Route 53 service to route traffic into it.
The option that says: The Cross-Origin Resource Sharing (CORS) option should be
enabled in the S3 bucket is incorrect because you only need to enable Cross-Origin Resource
Sharing (CORS) when your client web application on one domain interacts with the resources in
a different domain.
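To make these two prerequisites concrete, below is a minimal boto3 sketch (an illustration, not part of the question) that creates an alias record pointing a registered domain to the S3 website endpoint of a bucket that carries the same name. The hosted zone ID, the domain, and the S3 website endpoint values are placeholder assumptions for us-east-1 and should be replaced with the values from your own account:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z111111111111",  # placeholder: the hosted zone of your registered domain
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",  # must match the S3 bucket name exactly
                "Type": "A",
                "AliasTarget": {
                    # Placeholder: the S3 website endpoint and its hosted zone ID for us-east-1
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)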
References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html
Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/
Domain
Design Resilient Architectures
Question 2 (Incorrect)
A company has a top priority requirement to monitor certain database metrics and send email
notifications to the Operations team if any issues occur.
Which combination of AWS services can accomplish this requirement? (Select TWO.)
Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server.
Your selection is incorrect
Amazon Simple Email Service
Your selection is correct
Amazon CloudWatch
Correct selection
Amazon Simple Notification Service (SNS)
Amazon Simple Queue Service (SQS)
Overall explanation
Amazon CloudWatch and Amazon Simple Notification Service (SNS) are correct. In this
requirement, you can use Amazon CloudWatch to monitor the database and then Amazon SNS
to send the emails to the Operations team. Take note that you should use SNS rather than SES
(Simple Email Service) for sending alarm notifications from CloudWatch.
CloudWatch collects monitoring and operational data in the form of logs, metrics, and events,
providing you with a unified view of AWS resources, applications, and services that run on
AWS, and on-premises servers.
SNS is a highly available, durable, secure, fully managed pub/sub messaging service that
enables you to decouple microservices, distributed systems, and serverless applications.
Amazon Simple Email Service is incorrect. SES is primarily designed for sending bulk emails,
transactional emails, and marketing communications, rather than system notifications.
Amazon Simple Queue Service (SQS) is incorrect. SQS is a fully-managed message queuing
service. It does not monitor applications nor send email notifications, unlike SES.
Amazon EC2 Instance with a running Berkeley Internet Name Domain (BIND) Server is
incorrect because BIND is primarily used as a Domain Name System (DNS) web service. This
is only applicable if you have a private hosted zone in your AWS account. It does not monitor
applications nor send email notifications.
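As a rough sketch of how the two services fit together, the following boto3 snippet creates an SNS topic with an email subscription for the Operations team and a CloudWatch alarm on a sample Amazon RDS metric that publishes to that topic when breached. The topic name, email address, metric, and threshold are illustrative assumptions only:

import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# Topic that emails the Operations team (placeholder address)
topic_arn = sns.create_topic(Name="ops-db-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops-team@example.com")

# Alarm on a database metric; the alarm action points at the SNS topic
cloudwatch.put_metric_alarm(
    AlarmName="HighDatabaseCPU",
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-db"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],
)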
References:
https://aws.amazon.com/cloudwatch/
https://aws.amazon.com/sns/
Check out this Amazon CloudWatch Cheat Sheet:
https://tutorialsdojo.com/amazon-cloudwatch/
Domain
Design Resilient Architectures
Question 4 (Incorrect)
A GraphQL API is hosted in an Amazon EKS cluster with the AWS Fargate launch type and
deployed using AWS SAM. The API is connected to an Amazon DynamoDB table with
DynamoDB Accelerator (DAX) as its data store. Both resources are hosted in the us-east-1
region.
The AWS IAM authenticator for Kubernetes is integrated into the EKS cluster for role-based
access control (RBAC) and cluster authentication. A solutions architect must improve network
security by preventing database calls from traversing the public internet. An automated cross-
account backup for the DynamoDB table is also required for long-term retention.
Which of the following should the solutions architect implement to meet the requirement?
Create a DynamoDB interface endpoint. Set up a stateless rule using AWS Network
Firewall to control all outbound traffic to only use the dynamodb.us-east-
1.amazonaws.com endpoint. Integrate the DynamoDB table with Amazon Timestream to
allow point-in-time recovery from a different AWS account.
Create a DynamoDB interface endpoint. Associate the endpoint to the appropriate route
table. Enable Point-in-Time Recovery (PITR) to restore the DynamoDB table to a
particular point in time on the same or a different AWS account.
Correct answer
Create a DynamoDB gateway endpoint. Associate the endpoint to the appropriate route
table. Use AWS Backup to automatically copy the on-demand DynamoDB backups to
another AWS account for disaster recovery.
Your answer is incorrect
Create a DynamoDB gateway endpoint. Set up a Network Access Control List (NACL) rule
that allows outbound traffic to the dynamodb.us-east-1.amazonaws.com gateway
endpoint. Use the built-in on-demand DynamoDB backups for cross-account backup and
recovery.
Overall explanation
Since DynamoDB tables are public resources, applications within a VPC rely on an Internet
Gateway to route traffic to/from Amazon DynamoDB. You can use a Gateway endpoint if you
want to keep the traffic between your VPC and Amazon DynamoDB within the Amazon network.
This way, resources residing in your VPC can use their private IP addresses to access
DynamoDB with no exposure to the public internet.
When you create a DynamoDB Gateway endpoint, you specify the VPC where it will be
deployed as well as the route table that will be associated with the endpoint. The route table will
be updated with an Amazon DynamoDB prefix list (list of CIDR blocks) as the destination and
the endpoint's ID as the target.
DynamoDB on-demand backups are available at no additional cost beyond the normal pricing
that's associated with backup storage size. DynamoDB on-demand backups cannot be copied
to a different account or Region. To create backup copies across AWS accounts and Regions
and for other advanced features, you should use AWS Backup.
With AWS Backup, you can configure backup policies and monitor activity for your AWS
resources and on-premises workloads in one place. Using DynamoDB with AWS Backup, you
can copy your on-demand backups across AWS accounts and Regions, add cost allocation tags
to on-demand backups, and transition on-demand backups to cold storage for lower costs. To
use these advanced features, you must opt into AWS Backup. Opt-in choices apply to the
specific account and AWS Region, so you might have to opt into multiple Regions using the
same account.
Hence, the correct answer is: Create a DynamoDB gateway endpoint. Associate the
endpoint to the appropriate route table. Use AWS Backup to automatically copy the on-
demand DynamoDB backups to another AWS account for disaster recovery.
The option that says: Create a DynamoDB interface endpoint. Associate the endpoint to
the appropriate route table. Enable Point-in-Time Recovery (PITR) to restore the
DynamoDB table to a particular point in time on the same or a different AWS account is
incorrect. While this option addresses the network security requirement, Point-in-Time Recovery
(PITR) is only used for restoring a DynamoDB table to a specific point in time within the same
AWS account and region. It does not support cross-account backups or long-term retention. If
this functionality is needed, you have to use the AWS Backup service instead.
The option that says: Create a DynamoDB gateway endpoint. Set up a Network Access
Control List (NACL) rule that allows outbound traffic to the dynamodb.us-east-
1.amazonaws.com gateway endpoint. Use the built-in on-demand DynamoDB backups
for cross-account backup and recovery is incorrect because using a Network Access Control
List alone is not enough to prevent traffic traversing to the public Internet. Moreover, you cannot
copy DynamoDB on-demand backups to a different account or Region.
The option that says: Create a DynamoDB interface endpoint. Set up a stateless rule using
AWS Network Firewall to control all outbound traffic to only use the dynamodb.us-
east-1.amazonaws.com endpoint. Integrate the DynamoDB table with Amazon
Timestream to allow point-in-time recovery from a different AWS account is incorrect.
Keep in mind that the dynamodb.us-east-1.amazonaws.com is a public service endpoint
for Amazon DynamoDB. Since the application is able to communicate with Amazon DynamoDB
prior to the required architectural change, it's implied that no firewalls (security group, NACL,
etc.) are blocking traffic to/from Amazon DynamoDB, hence, adding an NACL rule to allow
outbound traffic to DynamoDB is unnecessary. Furthermore, the use of the AWS Network
Firewall in this solution is simply incorrect as you have to integrate this with your Amazon VPC.
The use of Amazon Timestream is also wrong since this is a time series database service in
AWS for IoT and operational applications. You cannot directly integrate DynamoDB and
Amazon Timestream for the purpose of point-in-time data recovery.
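For reference, a DynamoDB gateway endpoint can be created and associated with a route table in a single call. The sketch below uses boto3 with placeholder VPC and route table IDs; the cross-account copy itself would be configured separately in an AWS Backup plan:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB, attached to an existing route table (placeholder IDs)
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)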
References:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
https://aws.amazon.com/blogs/database/how-to-configure-a-private-network-environment-for-amazon-dynamodb-using-vpc-endpoints/
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/BackupRestore.html
Check out this Amazon DynamoDB Cheat sheet:
https://tutorialsdojo.com/amazon-dynamodb
Domain
Design Secure Architectures
Question 10 (Incorrect)
A company runs a messaging application in the ap-northeast-1 and ap-southeast-2
regions. A Solutions Architect needs to create a routing policy wherein a larger portion of traffic
from the Philippines and North India will be routed to the resource in the ap-northeast-1
region.
Which Route 53 routing policy should the Solutions Architect use?
Your answer is incorrect
Geolocation Routing
Latency Routing
Correct answer
Geoproximity Routing
Weighted Routing
Overall explanation
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.
You can use Route 53 to perform three main functions in any combination: domain registration,
DNS routing, and health checking. After you create a hosted zone for your domain, such as
example.com, you create records to tell the Domain Name System (DNS) how you want traffic
to be routed for that domain.
For example, you might create records that cause DNS to do the following:
- Route Internet traffic for example.com to the IP address of a host in your data center.
- Route email for that domain ([email protected]) to a mail server
(mail.tutorialsdojo.com).
- Route traffic for a subdomain called operations.manila.tutorialsdojo.com to the IP address of a
different host.
Each record includes the name of a domain or a subdomain, a record type (for example, a
record with a type of MX routes email), and other information applicable to the record type (for
MX records, the hostname of one or more mail servers and a priority for each server).
Route 53 has different routing policies that you can choose from. Below are some of the
policies:
Latency Routing lets Amazon Route 53 serve user requests from the AWS Region that
provides the lowest latency. It does not, however, guarantee that users in the same geographic
region will be served from the same location.
Geoproximity Routing lets Amazon Route 53 route traffic to your resources based on the
geographic location of your users and your resources. You can also optionally choose to route
more traffic or less to a given resource by specifying a value, known as a bias. A bias expands
or shrinks the size of the geographic region from which traffic is routed to a resource.
Geolocation Routing lets you choose the resources that serve your traffic based on the
geographic location of your users, meaning the location that DNS queries originate from.
Weighted Routing lets you associate multiple resources with a single domain name
(tutorialsdojo.com) or subdomain name (subdomain.tutorialsdojo.com) and choose how much
traffic is routed to each resource.
In this scenario, the problem requires a routing policy that will let Route 53 route traffic to the
resource in the Tokyo region from a larger portion of the Philippines and North India.
You need to use Geoproximity Routing and specify a bias to control the size of the geographic
region from which traffic is routed to your resource. For example, a bias of -40 in the Tokyo
region and a bias of 1 in the Sydney Region would cause Route 53 to route traffic coming from
the middle and northern part of the Philippines, as well as the northern part of India, to the
resource in the Tokyo Region.
Hence, the correct answer is Geoproximity Routing.
Geolocation Routing is incorrect because you cannot control the coverage size from which
traffic is routed to your instance in Geolocation Routing. It just lets you choose the instances
that will serve traffic based on the location of your users.
Latency Routing is incorrect because it is mainly used for improving performance by letting
Route 53 serve user requests from the AWS Region that provides the lowest latency.
Weighted Routing is incorrect because it is used for routing traffic to multiple resources in
proportions that you specify. This can be useful for load balancing and testing new versions of
software.
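The bias itself is just a field on a geoproximity record. The sketch below mirrors the values described above (a bias of -40 for the Tokyo resource and 1 for the Sydney resource) using boto3; it assumes the GeoProximityLocation field of the newer ChangeResourceRecordSets API, and the hosted zone ID, record name, and IP addresses are placeholders, so verify the exact field names against the current Route 53 API reference:

import boto3

route53 = boto3.client("route53")

def geoproximity_change(region, bias, ip):
    # One geoproximity record per region, with a bias value (placeholder name and IPs)
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.tutorialsdojo.com",
            "Type": "A",
            "SetIdentifier": f"{region}-endpoint",
            "GeoProximityLocation": {"AWSRegion": region, "Bias": bias},
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z111111111111",  # placeholder hosted zone
    ChangeBatch={"Changes": [
        geoproximity_change("ap-northeast-1", -40, "203.0.113.10"),
        geoproximity_change("ap-southeast-2", 1, "203.0.113.20"),
    ]},
)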
References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geoproximity
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/rrsets-working-with.html
Latency Routing vs. Geoproximity Routing vs. Geolocation Routing:
https://tutorialsdojo.com/latency-routing-vs-geoproximity-routing-vs-geolocation-routing/
Domain
Design Resilient Architectures
Question 12 (Incorrect)
A company has an application that continually sends encrypted documents to Amazon S3. The
company requires that the configuration for data access is in line with its strict compliance
standards. It should also be alerted if there is any risk of unauthorized access or suspicious
access patterns.
Which step is needed to meet the requirements?
Correct answer
Use Amazon GuardDuty to monitor malicious activity on S3.
Your answer is incorrect
Use Amazon Inspector to alert whenever a security violation is detected on S3.
Use AWS CloudTrail to monitor and detect access patterns on S3.
Use Amazon Rekognition to monitor and recognize patterns on S3.
Overall explanation
Amazon GuardDuty can generate findings based on suspicious activities such as requests
coming from known malicious IP addresses, changing of bucket policies/ACLs to expose an S3
bucket publicly, or suspicious API call patterns that attempt to discover misconfigured bucket
permissions.
To detect possibly malicious behavior, GuardDuty uses a combination of anomaly detection,
machine learning, and continuously updated threat intelligence.
Hence, the correct answer is: Use Amazon GuardDuty to monitor malicious activity on S3.
The option that says: Use Amazon Rekognition to monitor and recognize patterns on S3 is
incorrect because Amazon Rekognition is simply a service that can identify the objects, people,
text, scenes, and activities on your images or videos, as well as detect any inappropriate
content.
The option that says: Use AWS CloudTrail to monitor and detect access patterns on S3 is
incorrect. While AWS CloudTrail can track API calls for your account, including calls made by
the AWS Management Console, AWS SDKs, command line tools, and other AWS services, its
primary function is not to monitor and detect access patterns on S3. It’s more about
governance, compliance, operational auditing, and risk auditing.
The option that says: Use Amazon Inspector to alert whenever a security violation is
detected on S3 is incorrect because Inspector is only an automated security assessment
service that helps improve the security and compliance of applications deployed on AWS.
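Enabling this monitoring is a small configuration step. The hedged boto3 sketch below creates a GuardDuty detector with S3 protection turned on so that suspicious S3 access patterns generate findings; the publishing frequency is an arbitrary choice:

import boto3

guardduty = boto3.client("guardduty")

detector_id = guardduty.create_detector(
    Enable=True,
    DataSources={"S3Logs": {"Enable": True}},   # turn on S3 protection
    FindingPublishingFrequency="FIFTEEN_MINUTES",
)["DetectorId"]

# For an account that already has a detector, S3 protection can be enabled later:
# guardduty.update_detector(DetectorId=detector_id,
#                           DataSources={"S3Logs": {"Enable": True}})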
References:
https://aws.amazon.com/guardduty/
https://aws.amazon.com/blogs/aws/new-using-amazon-guardduty-to-protect-your-s3-buckets/
Check out this Amazon GuardDuty Cheat Sheet:
https://tutorialsdojo.com/amazon-guardduty/
Domain
Design Secure Architectures
Question 17 (Incorrect)
A Solutions Architect is building a cloud infrastructure where Amazon EC2 instances require
access to various AWS services, such as Amazon S3 and Amazon Redshift. The Architect will
also need to provide access to system administrators so that the system administrators can
deploy and test changes.
Which configuration should be used to ensure that access to the resources is secured and not
compromised? (Select TWO.)
Assign an IAM user for each EC2 Instance.
Your selection is correct
Assign an IAM role to the EC2 instance.
Store the AWS Access Keys in the EC2 instance.
Correct selection
Enable Multi-Factor Authentication.
Your selection is incorrect
Store the AWS Access Keys in ACM.
Overall explanation
In this scenario, the correct answers are:
- Enable Multi-Factor Authentication
- Assign an IAM role to the EC2 instance
Always remember that you should associate IAM roles with EC2 instances and not an IAM user,
for the purpose of accessing other AWS services. IAM roles are designed so that your
applications can securely make API requests from your instances, without requiring you to
manage the security credentials that the applications use. Instead of creating and distributing
your AWS credentials, you can delegate permission to make API requests using IAM roles.

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of
protection on top of your user name and password. With MFA enabled, when a user signs in to
an AWS website, they will be prompted for their user name and password (the first factor—what
they know), as well as for an authentication code from their AWS MFA device (the second factor
—what they have). Taken together, these multiple factors provide increased security for your
AWS account settings and resources. You can enable MFA for your AWS account and for
individual IAM users you have created under your account. MFA can also be used to control
access to AWS service APIs.
The option that says: Storing the AWS Access Keys in the EC2 instance is incorrect. This is
not recommended by AWS because the stored keys can be compromised. Instead of storing access keys on an
EC2 instance for use by applications that run on the instance and make AWS API requests, you
can use an IAM role to provide temporary access keys for these applications.
The option that says: Assigning an IAM user for each EC2 Instance is incorrect because
there is no need to create an IAM user for this scenario since IAM roles already provide greater
flexibility and easier management.
The option that says: Storing the AWS Access Keys in ACM is incorrect because ACM is just
a service that lets you easily provision, manage, and deploy public and private SSL/TLS
certificates for use with AWS services and your internal connected resources. It is not used as a
secure storage for your access keys.
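The role-based part of the answer boils down to attaching an instance profile instead of distributing keys. The following boto3 sketch creates a role that EC2 can assume, grants it an example read-only S3 policy, wraps it in an instance profile, and associates the profile with a running instance; the role name, policy, and instance ID are placeholder assumptions:

import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy that lets EC2 assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="app-ec2-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="app-ec2-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
iam.create_instance_profile(InstanceProfileName="app-ec2-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-ec2-profile",
                                 RoleName="app-ec2-role")

# Attach the profile to an existing instance (placeholder instance ID)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-ec2-profile"},
    InstanceId="i-0123456789abcdef0",
)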
References:
https://aws.amazon.com/iam/details/mfa/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Check out this AWS IAM Cheat Sheet:
https://tutorialsdojo.com/aws-identity-and-access-management-iam/
Domain
Design Secure Architectures
Question 18 (Incorrect)
An organization is currently using a tape backup solution to store its application data on-
premises. Plans are in place to use a cloud storage service to preserve the backup data for up
to 10 years, which may be accessed about once or twice a year.
Which of the following is the most cost-effective option to implement this solution?
Use Amazon S3 to store the backup data and add a lifecycle rule to transition the current
version to Amazon S3 Glacier.
Correct answer
Use AWS Storage Gateway to backup the data and transition it to Amazon S3 Glacier
Deep Archive.
Use AWS Storage Gateway to backup the data directly to Amazon S3 Glacier Flexible
Retrieval.
Your answer is incorrect
Order an AWS Snowball Edge appliance to import the backup directly to Amazon S3
Glacier Flexible Retrieval.
Overall explanation
Tape Gateway enables you to replace physical tapes on-premises with virtual tapes in AWS
without changing existing backup workflows. Tape Gateway supports all leading backup
applications and caches virtual tapes on-premises for low-latency data access. Tape Gateway
encrypts data between the gateway and AWS for secure data transfer and compresses data
and transitions virtual tapes between Amazon S3 and Amazon S3 Glacier Flexible Retrieval, or
Amazon S3 Glacier Deep Archive, to minimize storage costs.
The scenario requires you to back up your application data to a cloud storage service for long-
term retention of data that will be retained for 10 years. Given that the organization uses a tape
backup solution, an option that uses AWS Storage Gateway must be the possible answer. Tape
Gateway can transition virtual tapes archived in Amazon S3 Glacier Flexible Retrieval or
Amazon S3 Glacier Deep Archive storage class, enabling you to further reduce the monthly cost
to store long-term data in the cloud by up to 75%.
Hence, the correct answer is: Use AWS Storage Gateway to backup the data and transition
it to Amazon S3 Glacier Deep Archive.
The option that says: Use AWS Storage Gateway to backup the data directly to Amazon S3
Glacier Flexible Retrieval is incorrect. Although this is a valid solution, moving to S3 Glacier
Flexible Retrieval is typically more expensive than directly backing it up to Glacier Deep Archive.
The option that says: Order an AWS Snowball Edge appliance to import the backup
directly to Amazon S3 Glacier Flexible Retrieval is incorrect because Snowball Edge can't
directly integrate backups to S3 Glacier Flexible Retrieval. Moreover, you have to primarily use
the Amazon S3 Glacier Deep Archive storage class as it is more cost-effective than the Glacier
Flexible Retrieval.
The option that says: Use Amazon S3 to store the backup data and add a lifecycle rule to
transition the current version to S3 Glacier Flexible Retrieval is incorrect. Although this is a
possible solution, it is difficult to directly integrate a tape backup solution to S3 without using
Storage Gateway. Additionally, S3 Glacier Deep Archive is the most cost-effective storage class
for long-term retention.
References:
https://aws.amazon.com/storagegateway/faqs/
https://aws.amazon.com/s3/storage-classes/
Check out this AWS Storage Gateway Cheat Sheet:
https://tutorialsdojo.com/aws-storage-gateway/
Domain
Design Cost-Optimized Architectures
Question 20 (Incorrect)
A company is building an internal application that allows users to upload images. Each upload
request must be sent to Amazon Kinesis Data Streams for processing before the pictures are
stored in an Amazon S3 bucket.
The application should immediately return a success message to the user after the upload,
while the downstream processing is handled asynchronously. The processing typically takes
about 5 minutes to complete.
Which solution will enable asynchronous processing from Kinesis to S3 in the most cost-
effective way?
Correct answer
Use Kinesis Data Streams with AWS Lambda consumers to asynchronously process
records and write them to S3.
Use a combination of AWS Lambda and Step Functions to orchestrate service
components and asynchronously process the requests.
Your answer is incorrect
Send data from Kinesis Data Streams to Amazon Data Firehose and configure it to
deliver directly to S3.
Use a combination of Amazon SQS to queue the requests and then asynchronously
process them using On-Demand Amazon EC2 Instances.
Overall explanation
AWS Lambda supports both synchronous and asynchronous invocation of functions. When
AWS services are used as event sources, the invocation type is predetermined and cannot be
changed. For Amazon Kinesis Data Streams, AWS Lambda uses an event source mapping to
poll the stream, batch records, invoke the function, and manage checkpoints and retries.

By combining Kinesis Data Streams with Lambda consumers, the application can immediately
return a success message to the user after placing the upload request into the stream. Lambda
then processes the records asynchronously and stores the results in Amazon S3. This
serverless integration is cost-effective because Lambda scales automatically, requires no server
management, and only incurs costs for the provisioned shards and actual function invocations.
Hence, the correct answer is: Use Kinesis Data Streams with AWS Lambda consumers to
asynchronously process records and write them to S3.
The option that says: Use a combination of AWS Lambda and Step Functions to
orchestrate service components and asynchronously process the requests is incorrect
because Step Functions is an orchestration service (branching, sequencing, human approval,
retries) and adds per–state transition cost without a workflow need here. A direct Kinesis Data
Streams to Lambda (event source mapping) to S3 pipeline already handles polling, batching,
and retries, making Step Functions unnecessary and less cost-effective.
The option that says: Use a combination of Amazon SQS to queue the requests and then
asynchronously process them using On-Demand Amazon EC2 Instances is incorrect
because it simply violates the requirement that uploads “must be sent to Kinesis Data Streams,”
and it replaces serverless stream processing with EC2, which requires provisioning and
managing instances (capacity, patching, scaling), reducing cost-efficiency compared with
Lambda consumers on Kinesis.
The option that says: Send data from Kinesis Data Streams to Amazon Data Firehose and
configure it to deliver directly to S3 is incorrect because Data Firehose is a managed delivery
service optimized for buffering and loading data to destinations (like S3). While it supports
optional Lambda-based transformations, they are typically synchronous and constrained by
buffering and a 5-minute invocation limit. It is a brittle fit for 5-minute processing and offers less
flexibility than a Lambda consumer on Kinesis for application logic.
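For a sense of what the consumer side looks like, here is a hedged sketch of the Lambda function that the Kinesis event source mapping would invoke: records arrive base64-encoded in batches, are decoded and processed, and the results are written to S3. The bucket name, key layout, and processing step are placeholder assumptions:

import base64
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "image-processing-results"  # placeholder bucket

def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # ... the ~5-minute downstream processing would happen here ...
        s3.put_object(
            Bucket=BUCKET,
            Key=f"processed/{record['kinesis']['sequenceNumber']}.json",
            Body=json.dumps(payload),
        )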
References:
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html
https://aws.amazon.com/blogs/compute/new-aws-lambda-controls-for-stream-processing-and-asynchronous-invocations/
Check out this AWS Lambda Cheat Sheet:
https://tutorialsdojo.com/aws-lambda/
Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide:
https://tutorialsdojo.com/aws-certified-solutions-architect-associate-saa-c03/
Domain
Design Cost-Optimized Architectures
Question 21 (Incorrect)
A company is implementing its Business Continuity Plan. As part of this initiative, the IT Director
instructed the IT team to set up an automated backup of all the Amazon EBS volumes attached
to the company’s Amazon EC2 instances. The solution must be implemented as soon as
possible and should be both cost-effective and simple to maintain.
What is the fastest and most cost-effective solution to automatically back up all of the EBS
volumes?
Your answer is incorrect
Use an EBS snapshot retention rule in AWS Backup to automatically manage snapshot
retention and expiration.
For an automated solution, create a scheduled job that calls the "create-snapshot"
command via the AWS CLI to take a snapshot of production EBS volumes periodically.
Set your Amazon Storage Gateway with EBS volumes as the data source and store the
backups in your on-premises servers through the storage gateway.
Correct answer
Use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation of EBS
snapshots.
Overall explanation
Amazon Data Lifecycle Manager (DLM) automates the creation, retention, and deletion of
Amazon Elastic Block Store (EBS) snapshots. It simplifies EBS volume management by
allowing you to define policies that govern the lifecycle of these snapshots, ensuring regular
backups are created and obsolete snapshots are automatically removed.
You can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation,
retention, and deletion of snapshots taken to back up your Amazon EBS volumes. Automating
snapshot management helps you to:
- Protect valuable data by enforcing a regular backup schedule.
- Retain backups as required by auditors or internal compliance.
- Reduce storage costs by deleting outdated backups.

Combined with the monitoring features of Amazon EventBridge and AWS CloudTrail, Amazon
DLM provides a complete backup solution for EBS volumes at no additional cost.
Hence, the correct answer is: Use Amazon Data Lifecycle Manager (Amazon DLM) to
automate the creation of EBS snapshots.
The option that says: For an automated solution, create a scheduled job that calls the
"create-snapshot" command via the AWS CLI to take a snapshot of production EBS
volumes periodically is incorrect because even though this is a valid solution, you would
still need additional time to create a scheduled job that calls the "create-snapshot" command. It
would be better to use Amazon Data Lifecycle Manager (Amazon DLM) instead, as this
provides you with the fastest solution, which enables you to automate the creation, retention,
and deletion of the EBS snapshots without having to write custom shell scripts or creating
scheduled jobs.
The option that says: Set your Amazon Storage Gateway with EBS volumes as the data
source and store the backups in your on-premises servers through the storage gateway
is incorrect as the Amazon Storage Gateway is only used for creating a backup of data from
your on-premises server and not from the Amazon Virtual Private Cloud.
The option that says: Use an EBS snapshot retention rule in AWS Backup to automatically
manage snapshot retention and expiration is incorrect. AWS Backup can manage retention
rules, but is typically used for centralized backup across multiple AWS services. It is not just
focused on automating EBS snapshot creation. While possible, it is not the most cost-effective
or straightforward solution for this specific use case.
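A single DLM lifecycle policy covers the requirement. The sketch below (boto3, with a placeholder execution role ARN and tag) snapshots every EBS volume tagged Backup=true once a day and retains the last seven snapshots:

import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots for tagged volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],   # placeholder tag
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
            "CopyTags": True,
        }],
    },
)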
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
Check out this Amazon EBS Cheat Sheet:
https://tutorialsdojo.com/amazon-ebs/
Domain
Design Resilient Architectures
Question 23 (Incorrect)
Both historical records and frequently accessed data are stored on an on-premises storage
system. The amount of current data is growing at an exponential rate. As the storage’s capacity
is nearing its limit, the company’s Solutions Architect has decided to move the historical records
to AWS to free up space for the active data.
Which of the following architectures deliver the best solution in terms of cost and operational
management?
Use AWS DataSync to move the historical records from on-premises to AWS. Choose
Amazon S3 Standard to be the destination for the data. Modify the S3 lifecycle
configuration to move the data from the Standard tier to Amazon S3 Glacier Deep
Archive after 30 days.
Correct answer
Use AWS DataSync to move the historical records from on-premises to AWS. Choose
Amazon S3 Glacier Deep Archive to be the destination for the data.
Your answer is incorrect
Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle
configuration to move the data from the Standard tier to Amazon S3 Glacier Deep
Archive after 30 days.
Overall explanation
AWS DataSync makes it simple and fast to move large amounts of data online between on-
premises storage and Amazon S3, Amazon Elastic File System (Amazon EFS), or Amazon FSx
for Windows File Server. Manual tasks related to data transfers can slow down migrations and
burden IT operations. DataSync eliminates or automatically handles many of these tasks,
including scripting copy jobs, scheduling, and monitoring transfers, validating data, and
optimizing network utilization. The DataSync software agent connects to your Network File
System (NFS), Server Message Block (SMB) storage, and your self-managed object storage, so
you don’t have to modify your applications.
DataSync can transfer hundreds of terabytes and millions of files at speeds up to 10 times faster
than open-source tools, over the Internet or AWS Direct Connect links. You can use DataSync
to migrate active data sets or archives to AWS, transfer data to the cloud for timely analysis and
processing, or replicate data to AWS for business continuity. Getting started with DataSync is
easy: deploy the DataSync agent, connect it to your file system, select your AWS storage
resources, and start moving data between them. You pay only for the data you move.

Since the problem is mainly about moving historical records from on-premises to AWS, using
AWS DataSync is a more suitable solution. You can use DataSync to move cold data from
expensive on-premises storage systems directly to durable and secure long-term storage, such
as Amazon S3 Glacier or Amazon S3 Glacier Deep Archive.
Hence, the correct answer is the option that says: Use AWS DataSync to move the historical
records from on-premises to AWS. Choose Amazon S3 Glacier Deep Archive to be the
destination for the data.
The following options are both incorrect:
- Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier Deep Archive to be the destination for the data.
- Use AWS Storage Gateway to move the historical records from on-premises to AWS.
Choose Amazon S3 Glacier to be the destination for the data. Modify the S3 lifecycle
configuration to move the data from the Standard tier to Amazon S3 Glacier Deep
Archive after 30 days.
Although you can copy data from on-premises to AWS with Storage Gateway, it is not suitable
for transferring large sets of data to AWS. Storage Gateway is mainly used in providing low-
latency access to data by caching frequently accessed data on-premises while storing archive
data securely and durably in Amazon cloud storage services. Storage Gateway optimizes data
transfer to AWS by sending only changed data and compressing data.
The option that says: Use AWS DataSync to move the historical records from on-premises
to AWS. Choose Amazon S3 Standard to be the destination for the data. Modify the S3
lifecycle configuration to move the data from the Standard tier to Amazon S3 Glacier
Deep Archive after 30 days is incorrect because, with AWS DataSync, you can transfer data
from on-premises directly to Amazon S3 Glacier Deep Archive. You don’t have to configure the
S3 lifecycle policy and wait for 30 days to move the data to Glacier Deep Archive.
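As a rough illustration of that direct transfer, the boto3 sketch below pairs an on-premises NFS location with an S3 location that writes straight into the Deep Archive storage class and wires them into a DataSync task. The agent ARN, hostname, bucket, and role ARN are placeholder assumptions:

import boto3

datasync = boto3.client("datasync")

# On-premises NFS share exposed through a deployed DataSync agent (placeholders)
nfs_location = datasync.create_location_nfs(
    ServerHostname="onprem-nas.example.internal",
    Subdirectory="/exports/historical-records",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0"
    ]},
)["LocationArn"]

# S3 destination that lands objects directly in Glacier Deep Archive
s3_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::historical-records-archive",
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn":
              "arn:aws:iam::123456789012:role/datasync-s3-access"},
)["LocationArn"]

datasync.create_task(
    SourceLocationArn=nfs_location,
    DestinationLocationArn=s3_location,
    Name="historical-records-to-deep-archive",
)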
References:
https://aws.amazon.com/datasync/faqs/
https://aws.amazon.com/storagegateway/faqs/
Check out these AWS DataSync and Storage Gateway Cheat Sheets:
https://tutorialsdojo.com/aws-datasync/
https://tutorialsdojo.com/aws-storage-gateway/
Domain
Design Cost-Optimized Architectures
Question 27 (Incorrect)
An insurance company plans to implement a message filtering feature in its web application. To
implement this solution, it needs to create separate Amazon SQS queues for each type of quote
request. The entire message processing should not exceed 24 hours.
As the Solutions Architect for the company, which of the following solutions should be
implemented to meet the above requirement?
Correct answer
Create one Amazon SNS topic and configure the SQS queues to subscribe to the SNS
topic. Set the filter policies in the SNS subscriptions to publish the message to the
designated SQS queue based on its quote request type.
Create one Amazon SNS topic and configure the SQS queues to subscribe to the SNS
topic. Publish the same messages to all SQS queues. Filter the messages in each queue
based on the quote request type.
Your answer is incorrect
Create multiple Amazon SNS topics and configure the SQS queues to subscribe to the
SNS topics. Publish the message to the designated SQS queue based on the quote
request type.
Create a data stream in Amazon Kinesis Data Streams. Use the Kinesis Client Library to
deliver all the records to the designated SQS queues based on the quote request type.
Overall explanation
Amazon SNS is a fully managed pub/sub messaging service. With Amazon SNS, you can use
topics to simultaneously distribute messages to multiple subscribing endpoints such as Amazon
SQS queues, AWS Lambda functions, HTTP endpoints, email addresses, and mobile devices
(SMS, Push).
Amazon SQS is a message queue service used by distributed applications to exchange
messages through a polling model. It can be used to decouple sending and receiving
components without requiring each component to be concurrently available.
A fanout scenario occurs when a message published to an SNS topic is replicated and pushed
to multiple endpoints, such as Amazon SQS queues, HTTP(S) endpoints, and Lambda
functions. This allows for parallel asynchronous processing.

For example, you can develop an application that publishes a message to an SNS topic
whenever an order is placed for a product. Then, two or more SQS queues that are subscribed
to the SNS topic receive identical notifications for the new order. An Amazon Elastic Compute
Cloud (Amazon EC2) server instance attached to one of the SQS queues can handle the
processing or fulfillment of the order. And you can attach another Amazon EC2 server instance
to a data warehouse for analysis of all orders received.
By default, an Amazon SNS topic subscriber receives every message published to the topic.
You can use Amazon SNS message filtering to assign a filter policy to the topic subscription,
and the subscriber will only receive a message that they are interested in. Using Amazon SNS
and Amazon SQS together, messages can be delivered to applications that require immediate
notification of an event. This method is known as fanout to Amazon SQS queues.
Hence, the correct answer is: Create one Amazon SNS topic and configure the SQS queues
to subscribe to the SNS topic. Set the filter policies in the SNS subscriptions to publish
the message to the designated SQS queue based on its quote request type.
The option that says: Create one Amazon SNS topic and configure the SQS queues to
subscribe to the SNS topic. Publish the same messages to all SQS queues. Filter the
messages in each queue based on the quote request type is incorrect because this will only
distribute the same messages on all SQS queues instead of its designated queue. You need to
fan-out the messages to multiple SQS queues using a filter policy in Amazon SNS subscriptions
to allow parallel asynchronous processing. By doing so, the entire message processing will not
exceed 24 hours.
The option that says: Create multiple Amazon SNS topics and configure the SQS queues
to subscribe to the SNS topics. Publish the message to the designated SQS queue based
on the quote request type is incorrect because to implement the solution asked in the
scenario, you only need to use one Amazon SNS topic. To publish it to the designated SQS
queue, you must set a filter policy that allows you to fan out the messages. If you didn't set a
filter policy in Amazon SNS, the subscribers would receive all the messages published to the
SNS topic. Thus, using multiple SNS topics is not an appropriate solution for this scenario.
The option that says: Create a data stream in Amazon Kinesis Data Streams. Use the
Kinesis Client Library to deliver all the records to the designated SQS queues based on
the quote request type is incorrect because Amazon KDS is not a message filtering service.
You should use Amazon SNS and SQS to distribute the topic to the designated queue.
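To show how the filter policy does the routing, here is a hedged boto3 sketch with one topic, two per-type queues, and a subscription filter on a quote_type message attribute; the topic, queue ARNs, and attribute name are placeholder assumptions:

import json
import boto3

sns = boto3.client("sns")
topic_arn = sns.create_topic(Name="quote-requests")["TopicArn"]

queue_arns = {
    "auto": "arn:aws:sqs:us-east-1:123456789012:auto-quotes",   # placeholders
    "home": "arn:aws:sqs:us-east-1:123456789012:home-quotes",
}

# Each queue subscribes with a filter policy so it only receives its own quote type
for quote_type, queue_arn in queue_arns.items():
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"FilterPolicy": json.dumps({"quote_type": [quote_type]})},
    )

# Publishers set the message attribute that the filter policies match on
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"customer_id": "12345"}),
    MessageAttributes={"quote_type": {"DataType": "String", "StringValue": "auto"}},
)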
References:
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
https://docs.aws.amazon.com/sns/latest/dg/sns-message-filtering.html
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
Check out these Amazon SNS and SQS Cheat Sheets:
https://tutorialsdojo.com/amazon-sns/
https://tutorialsdojo.com/amazon-sqs/
Domain
Design High-Performing Architectures
Question 29 (Incorrect)
A company has multiple VPCs with IPv6 enabled for its suite of web applications. The Solutions
Architect attempted to deploy a new Amazon EC2 instance but encountered an error indicating
that there were no available IP addresses on the subnet. The VPC has a combination of IPv4
and IPv6 CIDR blocks, but the IPv4 CIDR blocks are nearing exhaustion. The architect needs a
solution that will resolve this issue while allowing future scalability.
How should the Solutions Architect resolve this problem?
Your answer is incorrect
Set up a new IPv4 subnet with a larger CIDR range. Associate the new subnet with the
VPC and then launch the instance.
Ensure that the VPC has IPv6 CIDRs only. Remove any IPv4 CIDRs associated with the
VPC.
Disable the IPv4 support in the VPC and use the available IPv6 addresses.
Correct answer
Set up a new IPv6-only subnet with a large CIDR range. Associate the new subnet with
the VPC then launch the instance.
Overall explanation
Amazon Virtual Private Cloud (VPC) is a service that lets you launch AWS resources in a
logically isolated virtual network that you define. You have complete control over your virtual
networking environment, including selection of your own IP address range, creation of subnets,
and configuration of route tables and network gateways. You can use both IPv4 and IPv6 for
most resources in your virtual private cloud, helping to ensure secure and easy access to
resources and applications.
A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a
specified subnet. When you create a VPC, you must specify a range of IPv4 addresses for the
VPC in the form of a CIDR block. Each subnet must reside entirely within one Availability Zone
and cannot span zones. You can also optionally assign an IPv6 CIDR block to your VPC, and
assign IPv6 CIDR blocks to your subnets.

If you have an existing VPC that supports IPv4 only and resources in your subnet are
configured to use IPv4 only, you can enable IPv6 support for your VPC and resources. Your
VPC can operate in dual-stack mode, your resources can communicate over IPv4, or IPv6, or
both. IPv4 and IPv6 communication are independent of each other. You cannot disable IPv4
support for your VPC and subnets since this is the default IP addressing system for Amazon
VPC and Amazon EC2.
The scenario describes a situation where the company's IPv4 CIDR blocks are nearly
exhausted, and the Solutions Architect needs a solution that allows for future scalability. The
VPC is already IPv6-enabled. The most efficient and forward-thinking solution is to leverage the
available IPv6 address space to solve the problem of IPv4 address exhaustion. By creating an
IPv6-only subnet, the architect can launch the new EC2 instance without using any of the
scarce IPv4 addresses, thus resolving the immediate issue and ensuring long-term scalability.
Hence, the correct answer is: Set up a new IPv6-only subnet with a large CIDR range.
Associate the new subnet with the VPC then launch the instance.
The option that says: Set up a new IPv4 subnet with a larger CIDR range. Associate the
new subnet with the VPC and then launch the instance is incorrect because it is not a
scalable, long-term solution. While creating a new IPv4 subnet would temporarily solve the
immediate problem of address exhaustion, it does not address the fundamental issue of the
limited IPv4 address space. The company would eventually face the same problem again. This
approach fails to meet the requirement for future scalability and is a temporary fix rather than a
sustainable strategy.
The option that says: Ensure that the VPC has IPv6 CIDRs only. Remove any IPv4 CIDRs
associated with the VPC is incorrect because it is not technically possible. By default, a VPC
is an IPv4-based network. You cannot create a VPC that is IPv6-only, nor can you
remove all IPv4 CIDR blocks from an existing VPC. You can, however, use an IPv6 CIDR block
alongside an IPv4 block in a dual-stack configuration, or create IPv6-only subnets within a dual-
stack VPC.
The option that says: Disable the IPv4 support in the VPC and use the available IPv6
addresses is incorrect because you cannot disable IPv4 support for a VPC. IPv4 is the default
and a mandatory component of a VPC. Disabling it would break all existing services that rely on
IPv4 connectivity and would violate the fundamental design of the VPC service. The correct
approach is to utilize IPv6 for new resources while maintaining IPv4 for existing services as part
of a migration strategy.
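A minimal sketch of the correct option, assuming placeholder IDs and an IPv6 /64 carved from the VPC's existing IPv6 block, looks like this in boto3 (note that IPv6-only subnets require instance types built on the Nitro system):

import boto3

ec2 = boto3.client("ec2")

# IPv6-only subnet: no IPv4 addresses are allocated in it (placeholder IDs/CIDR)
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    Ipv6CidrBlock="2600:1f18:1234:5600::/64",
    Ipv6Native=True,
    AvailabilityZone="us-east-1a",
)["Subnet"]

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",           # Nitro-based type that supports IPv6-only subnets
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["SubnetId"],
)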
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html
https://aws.amazon.com/vpc/faqs/
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/
Domain
Design Resilient Architectures
Question 31 (Incorrect)
A company has an on-premises MySQL database that needs to be replicated in Amazon S3 as
CSV files. The database will eventually be launched to an Amazon Aurora Serverless cluster
and be integrated with an RDS Proxy to allow the web applications to pool and share database
connections. Once data has been fully copied, the ongoing changes to the on-premises
database should be continually streamed into the S3 bucket. The company wants a solution that
can be implemented with little management overhead yet still highly secure.
Which ingestion pattern should a solutions architect take?
Use AWS Schema Conversion Tool (AWS SCT) to convert MySQL data to CSV files. Set
up the AWS Application Migration Service (AWS MGN) to capture ongoing changes from
the on-premises MySQL database and send them to Amazon S3.
Your answer is incorrect
Use an AWS Snowball Edge cluster to migrate data to Amazon S3 and AWS DataSync to
capture ongoing changes. Create your own custom AWS KMS envelope encryption key
for the associated AWS Snowball Edge job.
Correct answer
Create a full load and change data capture (CDC) replication task using AWS Database
Migration Service (AWS DMS). Add a new Certificate Authority (CA) certificate and create
an AWS DMS endpoint with SSL.
Set up a full load replication task using AWS Database Migration Service (AWS DMS).
Launch an AWS DMS endpoint with SSL using the AWS Network Firewall service.
Overall explanation
AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate
relational databases, data warehouses, NoSQL databases, and other types of data stores. You
can use AWS DMS to migrate your data into the AWS Cloud, between on-premises instances
(through an AWS Cloud setup) or between combinations of cloud and on-premises setups. With
AWS DMS, you can perform one-time migrations, and you can replicate ongoing changes to
keep sources and targets in sync.
You can migrate data to Amazon S3 using AWS DMS from any of the supported database
sources. When using Amazon S3 as a target in an AWS DMS task, both full load and change
data capture (CDC) data is written to comma-separated value (.csv) format by default.
The comma-separated value (.csv) format is the default storage format for Amazon S3 target
objects. For more compact storage and faster queries, you can instead use Apache Parquet
(.parquet) as the storage format.
You can encrypt connections for source and target endpoints by using Secure Sockets Layer
(SSL). To do so, you can use the AWS DMS Management Console or AWS DMS API to assign
a certificate to an endpoint. You can also use the AWS DMS console to manage your
certificates.
Not all databases use SSL in the same way. Amazon Aurora MySQL-Compatible Edition uses
the server name, the endpoint of the primary instance in the cluster, as the endpoint for SSL. An
Amazon Redshift endpoint already uses an SSL connection and does not require an SSL
connection set up by AWS DMS.
Hence, the correct answer is: Create a full load and change data capture (CDC) replication
task using AWS Database Migration Service (AWS DMS). Add a new Certificate Authority
(CA) certificate and create an AWS DMS endpoint with SSL.
The option that says: Set up a full load replication task using AWS Database Migration
Service (AWS DMS). Launch an AWS DMS endpoint with SSL using the AWS Network
Firewall service is incorrect because a full load replication task alone won't capture ongoing
changes to the database. You still need to implement a change data capture (CDC) replication
to copy the recent changes after the migration. Moreover, the AWS Network Firewall service is
not capable of creating an AWS DMS endpoint with SSL. The Certificate Authority (CA)
certificate can be directly uploaded to the AWS DMS console without the AWS Network Firewall
at all.
The option that says: Use an AWS Snowball Edge cluster to migrate data to Amazon S3
and AWS DataSync to capture ongoing changes is incorrect. While this is doable, it's more
suited to the migration of large databases which require the use of two or more Snowball Edge
appliances. Also, the usage of AWS DataSync for replicating ongoing changes to Amazon S3
requires extra steps that can be simplified with AWS DMS.
The option that says: Use AWS Schema Conversion Tool (AWS SCT) to convert MySQL
data to CSV files. Set up the AWS Application Migration Service (AWS MGN) to capture
ongoing changes from the on-premises MySQL database and send them to Amazon S3 is
incorrect. AWS SCT is not used for data replication; it just eases up the conversion of source
databases to a format compatible with the target database when migrating. In addition, using
the AWS Application Migration Service (AWS MGN) for this scenario is inappropriate. This
service is primarily used for lift-and-shift migrations of applications from physical infrastructure,
VMware vSphere, Microsoft Hyper-V, Amazon Elastic Compute Cloud (Amazon EC2), Amazon
Virtual Private Cloud (Amazon VPC), and other clouds to AWS.
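Put together, the correct option translates into roughly the following boto3 sketch: import the CA certificate, create an SSL-enabled MySQL source endpoint and an S3 target endpoint, and start a full-load-plus-CDC task. Every identifier, ARN, hostname, and credential below is a placeholder assumption:

import json
import boto3

dms = boto3.client("dms")

# CA certificate used to verify the SSL connection to the on-premises MySQL server
cert_arn = dms.import_certificate(
    CertificateIdentifier="onprem-mysql-ca",
    CertificatePem=open("mysql-ca.pem").read(),
)["Certificate"]["CertificateArn"]

source_arn = dms.create_endpoint(
    EndpointIdentifier="onprem-mysql-source",
    EndpointType="source",
    EngineName="mysql",
    ServerName="mysql.onprem.example.internal",
    Port=3306,
    Username="dms_user",
    Password="example-password",
    SslMode="verify-ca",
    CertificateArn=cert_arn,
)["Endpoint"]["EndpointArn"]

target_arn = dms.create_endpoint(
    EndpointIdentifier="s3-csv-target",
    EndpointType="target",
    EngineName="s3",
    S3Settings={"BucketName": "mysql-replica-csv",
                "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-access"},
)["Endpoint"]["EndpointArn"]

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-s3-full-load-and-cdc",
    SourceEndpointArn=source_arn,
    TargetEndpointArn=target_arn,
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:EXAMPLEEXAMPLE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "all-tables",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include"}]}),
)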
References:
https://aws.amazon.com/blogs/big-data/loading-ongoing-data-lake-changes-with-aws-dms-and-aws-glue/
https://docs.aws.amazon.com/dms/latest/userguide/Welcome.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.SSL.Limitations
Check out this AWS Database Migration Service Cheat Sheet:
https://tutorialsdojo.com/aws-database-migration-service/
Domain
Design High-Performing Architectures
Question 34 (Incorrect)
A media company wants to ensure that the images it delivers through Amazon CloudFront are
compatible across various user devices. The company plans to serve images in WebP format to
user agents that support it and fall back to JPEG format for those that don't. Additionally, the
company wants to add a custom header to the response for tracking purposes.
As a solution architect, what approach would one recommend to meet these requirements while
minimizing operational overhead?
Implement an image conversion service on Amazon EC2 instances and integrate it with
CloudFront. Use AWS Lambda functions to modify the response headers and serve the
appropriate format based on the User-Agent header.
Generate a CloudFront response headers policy. Utilize the policy to deliver the suitable
image format according to the User-Agent HTTP header in the incoming request.
Correct answer
Configure CloudFront behaviors to handle different image formats based on the User-
Agent header. Use Lambda@Edge functions to modify the response headers and serve
the appropriate format.
Your answer is incorrect
Create multiple CloudFront distributions, each serving a specific image format (WebP or
JPEG). Route incoming requests based on the User-Agent header to the respective
distribution using Amazon Route 53.
Overall explanation
Amazon CloudFront is a content delivery network (CDN) service that enables the efficient
distribution of web content to users across the globe. It reduces latency by caching static and
dynamic content in multiple edge locations worldwide and improves the overall user experience.
Lambda@Edge allows you to run Lambda functions at the edge locations of the CloudFront
CDN. With this, you can perform various tasks, such as modifying HTTP headers, generating
dynamic responses, implementing security measures, and customizing content based on user
preferences, device type, location, or other criteria.
When a request is made to a CloudFront distribution, Lambda@Edge enables you to intercept the request and execute custom code before CloudFront processes it. Similarly, you can intercept the response generated by CloudFront and modify it before it's returned to the viewer.
In the given scenario, Lambda@Edge can be used to dynamically serve different image formats
based on the User-agent header received by CloudFront. Additionally, you can inject custom
response headers before CloudFront returns the response to the viewer.
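As a rough sketch (the header name and the Chrome check below are illustrative assumptions, not requirements from the question), a Python Lambda@Edge function attached to the origin-response event could inspect the User-Agent and inject a custom tracking header like this:

    # Minimal Lambda@Edge origin-response handler sketch.
    def handler(event, context):
        cf = event["Records"][0]["cf"]
        request = cf["request"]
        response = cf["response"]

        # CloudFront lowercases header keys in the event payload.
        user_agent = request["headers"].get("user-agent", [{"value": ""}])[0]["value"]
        image_format = "webp" if "Chrome" in user_agent else "jpeg"

        # Add a custom header for tracking which format was served.
        response["headers"]["x-image-format"] = [
            {"key": "X-Image-Format", "value": image_format}
        ]
        return response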
Hence, the correct answer is: Configure CloudFront behaviors to handle different image
formats based on the User-Agent header. Use Lambda@Edge functions to modify the
response headers and serve the appropriate format.
The option that says: Create multiple CloudFront distributions, each serving a specific
image format (WebP or JPEG). Route incoming requests based on the User-Agent header
to the respective distribution using Amazon Route 53 is incorrect because creating multiple
CloudFront distributions for each image format is unnecessary and just increases operational
overhead.
The option that says: Generate a CloudFront response headers policy. Utilize the policy to
deliver the suitable image format according to the User-Agent HTTP header in the
incoming request is incorrect. CloudFront response headers policies simply tell which HTTP
headers should be included or excluded in the responses sent by CloudFront. You cannot use
them to dynamically select and serve image formats based on the User-agent.
The option that says: Implement an image conversion service on Amazon EC2 instances
and integrate it with CloudFront. Use AWS Lambda functions to modify the response
headers and serve the appropriate format based on the User-Agent header is incorrect.
Building an image conversion service using EC2 instances requires additional operational
management. You can instead use Lambda@Edge functions to modify response headers and
serve the correct image format based on the User-agent header.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-
edge.html
https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html
Check out these AWS Lambda and Amazon CloudFront Cheat Sheets:
https://tutorialsdojo.com/aws-lambda/
https://tutorialsdojo.com/amazon-cloudfront/
Domain
Design High-Performing Architectures
Question 35Incorrect
A software company has resources hosted in AWS and on-premises servers. You have been
requested to create a decoupled architecture for applications which make use of both
resources.
Which of the following options are valid? (Select TWO.)
Use RDS to utilize both on-premises servers and EC2 instances for your decoupled
application
Use VPC peering to connect both on-premises servers and EC2 instances for your
decoupled application
Use DynamoDB to utilize both on-premises servers and EC2 instances for your
decoupled application
Your selection is correct
Use SQS to utilize both on-premises servers and EC2 instances for your decoupled
application
Correct selection
Use SWF to utilize both on-premises servers and EC2 instances for your decoupled
application
Overall explanation
Amazon Simple Queue Service (SQS) and Amazon Simple Workflow Service (SWF) are
the services that you can use for creating a decoupled architecture in AWS. Decoupled
architecture is a type of computing architecture that enables computing components or layers to
execute independently while still interfacing with each other.
Amazon SQS offers reliable, highly-scalable hosted queues for storing messages while they
travel between applications or microservices. Amazon SQS lets you move data between
distributed application components and helps you decouple these components. Amazon SWF is
a web service that makes it easy to coordinate work across distributed application components.
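For example, a short boto3 sketch of the decoupling pattern, where an on-premises producer and an EC2-hosted consumer share nothing but a queue (the queue name and message body are made up for illustration):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

    # Producer side (could run on-premises): push a message to the queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

    # Consumer side (could run on EC2): poll, process, then delete the message.
    for msg in sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10).get("Messages", []):
        print(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])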
Using RDS to utilize both on-premises servers and EC2 instances for your decoupled application and using DynamoDB to utilize both on-premises servers and EC2 instances for your decoupled application are incorrect because RDS and DynamoDB are database services, not messaging or workflow services that can decouple application components.
Using VPC peering to connect both on-premises servers and EC2 instances for your decoupled application is incorrect because you can't create a VPC peering connection between your on-premises network and an AWS VPC.
References:
https://aws.amazon.com/sqs/
http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-welcome.html
Check out this Amazon SQS Cheat Sheet:
https://tutorialsdojo.com/amazon-sqs/
Amazon Simple Workflow (SWF) vs AWS Step Functions vs Amazon SQS:
https://tutorialsdojo.com/amazon-simple-workflow-swf-vs-aws-step-functions-vs-amazon-sqs/
Comparison of AWS Services Cheat Sheets:
https://tutorialsdojo.com/comparison-of-aws-services/
Domain
Design High-Performing Architectures
Question 36Incorrect
A startup has multiple AWS accounts that are assigned to its development teams. Since the
company is projected to grow rapidly, the management wants to consolidate all of its AWS
accounts into a multi-account setup. To simplify the login process on the AWS accounts, the
management wants to utilize its existing directory service for user authentication.
Which combination of actions should a solutions architect recommend to meet these
requirements? (Select TWO.)
Your selection is incorrect
Create Service Control Policies (SCP) in the organization to manage the child accounts.
Configure AWS IAM Identity Center (AWS Single Sign-On) to use AWS Directory Service.
Your selection is correct
Configure AWS IAM Identity Center (AWS Single Sign-On) for the organization and
integrate it with the company’s directory service using the Active Directory Connector
Create an identity pool on Amazon Cognito and configure it to use the company’s
directory service. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept
Cognito authentication.
On the master account, use AWS Organizations to create a new organization with all
features turned on. Enable the organization’s external authentication and point it to use
the company’s directory service.
Correct selection
On the master account, use AWS Organizations to create a new organization with all
features turned on. Invite the child accounts to this new organization.
Overall explanation
AWS Organizations is an account management service that enables you to consolidate
multiple AWS accounts into an organization that you create and centrally manage. AWS
Organizations includes account management and consolidated billing capabilities that enable
you to better meet the budgetary, security, and compliance needs of your business. As an
administrator of an organization, you can create accounts in your organization and invite
existing accounts to join the organization.
AWS IAM Identity Center (successor to AWS Single Sign-On) provides single sign-on
access for all of your AWS accounts and cloud applications. It connects with Microsoft Active
Directory through AWS Directory Service to allow users in that directory to sign in to a
personalized AWS access portal using their existing Active Directory user names and
passwords. From the AWS access portal, users have access to all the AWS accounts and cloud
applications that they have permission for.
Users in your self-managed directory in Active Directory (AD) can also have single sign-on
access to AWS accounts and cloud applications in the AWS access portal.
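As a minimal sketch of the AWS Organizations part of the setup (the account ID is a placeholder; IAM Identity Center and the Active Directory Connector are configured separately through their own console or APIs):

    import boto3

    org = boto3.client("organizations")

    # Run from the management (master) account: create the organization with all features enabled.
    org.create_organization(FeatureSet="ALL")

    # Invite an existing child account to join the organization.
    org.invite_account_to_organization(Target={"Id": "111122223333", "Type": "ACCOUNT"})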
Therefore, the correct answers are:
- On the master account, use AWS Organizations to create a new organization with all features turned on. Invite the child accounts to this new organization.
- Configure AWS IAM Identity Center (AWS Single Sign-On) for the organization and integrate it with the company’s directory service using the Active Directory Connector.
The option that says: On the master account, use AWS Organizations to create a new
organization with all features turned on. Enable the organization’s external
authentication and point it to use the company’s directory service is incorrect. There is no
option to use an external authentication on AWS Organizations. You will need to configure AWS
SSO if you want to use an existing Directory Service.
The option that says: Create an identity pool on Amazon Cognito and configure it to use
the company’s directory service. Configure AWS IAM Identity Center (AWS Single Sign-
On) to accept Cognito authentication is incorrect. Amazon Cognito is used for single sign-on
in mobile and web applications. You don't have to use it if you already have an existing
Directory Service to be used for authentication.
The option that says: Create Service Control Policies (SCP) in the organization to manage
the child accounts. Configure AWS IAM Identity Center (AWS Single Sign-On) to use
AWS Directory Service is incorrect. SCPs are not necessarily needed for logging in on this
scenario. You can use SCP if you want to restrict or implement a policy across several accounts
in the organization.
References:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html
https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html
https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-
sso.html
Check out AWS Organizations Cheat Sheets:
https://tutorialsdojo.com/aws-organizations/
Domain
Design Secure Architectures
Question 38Incorrect
A company has a dynamic web app written in the MEAN stack that is going to be launched in the
next month. There is a probability that the traffic will be quite high in the first couple of weeks. In
the event of a load failure, how can you set up DNS failover to a static website?
Your answer is incorrect
Duplicate the exact application architecture in another region and configure DNS weight-
based routing.
Correct answer
Use Route 53 with the failover option to a static S3 website bucket or CloudFront
distribution.
Enable failover to an application hosted in an on-premises data center.
Add more servers in case the application fails.
Overall explanation
For this scenario, using Route 53 with the failover option to a static S3 website bucket or CloudFront distribution is correct. You can create a Route 53 failover record set that routes traffic to a static S3 website bucket or a CloudFront distribution whenever the primary application becomes unavailable.
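A hedged boto3 sketch of the secondary (failover) record is shown below; the hosted zone ID, domain name, and region-specific S3 website values are placeholders, and a matching PRIMARY record with a health check would point to the main application:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z111111111111",  # placeholder hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "static-fallback",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        # S3 website endpoint for the bucket's region (us-east-1 shown here).
                        "HostedZoneId": "Z3AQBSTGFYJSTF",
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )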
Duplicating the exact application architecture in another region and configuring DNS
weight-based routing is incorrect because running a duplicate system is not a cost-effective
solution. Remember that you are trying to build a failover mechanism for your web app, not a
distributed setup.
Enabling failover to an application hosted in an on-premises data center is incorrect.
Although you can set up failover to your on-premises data center, you are not maximizing the
AWS environment such as using Route 53 failover.
Adding more servers in case the application fails is incorrect because this is not the best
way to handle a failover event. If you add more servers only in case the application fails, then
there would be a period of downtime in which your application is unavailable. Since there are no running servers during that period, your application will remain unavailable until the new servers are up and running.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/fail-over-s3-r53/
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/amazon-route-53/
Domain
Design Resilient Architectures
Question 41Incorrect
A media company hosts large volumes of archive data that are about 250 TB in size on its
internal servers. The company has decided to move this data to Amazon S3 because of its
durability and redundancy. The company currently has a 100 Mbps dedicated line connecting its
head office to the Internet.
Which of the following is the FASTEST and the MOST cost-effective way to import all this data
to S3?
Your answer is incorrect
Use S3 Transfer Acceleration to speed up the upload.
Correct answer
Order multiple AWS Snowball devices to upload the files to S3.
Establish an AWS Direct Connect connection then transfer the data over to S3.
Upload it directly to S3
Overall explanation
AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to
transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses
common challenges with large-scale data transfers, including high network costs, long transfer
times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can
be as little as one-fifth the cost of high-speed Internet.
Snowball is a strong choice for data transfer if you need to more securely and quickly transfer
terabytes to many petabytes of data to AWS. Snowball can also be the right choice if you don’t
want to make expensive upgrades to your network infrastructure, if you frequently experience
large backlogs of data, if you're located in a physically isolated environment, or if you're in an
area where high-speed Internet connections are not available or cost-prohibitive.
As a rule of thumb, if it takes more than one week to upload your data to AWS using the spare
capacity of your existing Internet connection, then you should consider using Snowball. For
example, if you have a 100 Mb connection that you can solely dedicate to transferring your data
and need to transfer 100 TB of data, it takes more than 100 days to complete data transfer over
that connection. You can make the same transfer by using multiple Snowballs in about a week.
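Applying the same back-of-the-envelope math to this scenario (ignoring protocol overhead, so real throughput would be even lower):

    # Rough transfer-time estimate for 250 TB over a dedicated 100 Mbps line.
    data_bits = 250 * 10**12 * 8      # 250 TB expressed in bits
    line_bps = 100 * 10**6            # 100 Mbps
    days = data_bits / line_bps / 86400
    print(f"{days:.0f} days")         # roughly 231 days, versus about a week with multiple Snowballs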
Hence, the correct answer is: Order multiple AWS Snowball devices to upload the files to
S3.
The option that says: Upload it directly to S3 is incorrect because it would typically take too
long to finish due to the slow Internet connection of the company.
The option that says: Establish an AWS Direct Connect connection then transfer the data
over to S3 is incorrect because provisioning a line for Direct Connect would take too much time
and might not only give you the fastest data transfer solution. In addition, the scenario didn't
warrant an establishment of a dedicated connection from your on-premises data center to AWS.
The primary goal is to just do a one-time migration of data to AWS which can be accomplished
by using AWS Snowball devices.
The option that says: Use S3 Transfer Acceleration to speed up the upload is incorrect
because Transfer Acceleration primarily improves upload performance by utilizing AWS edge
locations, but it does not significantly boost speeds for large datasets on a limited bandwidth
connection. Transferring 250 TB over a 100 Mbps internet connection would be extremely slow.
References:
https://aws.amazon.com/snowball/
https://aws.amazon.com/snowball/faqs/
Comparison of AWS Services Cheat Sheets:
https://tutorialsdojo.com/comparison-of-aws-services/
Domain
Design Cost-Optimized Architectures
Question 42Incorrect
A company wants to streamline the process of creating multiple AWS accounts within an AWS
Organization. Each organization unit (OU) must be able to launch new accounts with
preapproved configurations from the security team which will standardize the baselines and
network configurations for all accounts in the organization.
Which solution entails the least amount of effort to implement?
Your answer is incorrect
Set up an AWS Config aggregator on the root account of the organization to enable
multi-account, multi-region data aggregation. Deploy conformance packs to standardize
the baselines and network configurations for all accounts.
Configure AWS Resource Access Manager (AWS RAM) to launch new AWS accounts as
well as standardize the baselines and network configurations for each organization unit
Correct answer
Set up an AWS Control Tower Landing Zone. Enable pre-packaged guardrails to enforce
policies or detect violations.
Centralize the creation of AWS accounts using AWS Systems Manager OpsCenter.
Enforce policies and detect violations to all AWS accounts using AWS Security Hub.
Overall explanation
AWS Control Tower provides a single location to easily set up your new well-architected multi-
account environment and govern your AWS workloads with rules for security, operations, and
internal compliance. You can automate the setup of your AWS environment with best-practices
blueprints for multi-account structure, identity, access management, and account provisioning
workflow. For ongoing governance, you can select and apply pre-packaged policies enterprise-
wide or to specific groups of accounts.
AWS Control Tower provides three methods for creating member accounts:
- Through the Account Factory console that is part of AWS Service Catalog.
- Through the Enroll account feature within AWS Control Tower.
- From your AWS Control Tower landing zone's management account, using Lambda code and
appropriate IAM roles.
AWS Control Tower offers "guardrails" for ongoing governance of your AWS environment.
Guardrails provide governance controls by preventing the deployment of resources that don’t
conform to selected policies or detecting non-conformance of provisioned resources. AWS
Control Tower automatically implements guardrails using multiple building blocks such as AWS
CloudFormation to establish a baseline, AWS Organizations service control policies (SCPs) to
prevent configuration changes, and AWS Config rules to continuously detect non-conformance.
In this scenario, the requirement is to simplify the creation of AWS accounts that have
governance guardrails and a defined baseline in place. To save time and resources, you can
use AWS Control Tower to automate account creation. With the appropriate user group
permissions, you can specify standardized baselines and network configurations for all accounts
in the organization.
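Much of AWS Control Tower is driven from the console, but as a rough sketch, a guardrail (control) can also be enabled on an organizational unit programmatically; the control and OU ARNs below are placeholders:

    import boto3

    controltower = boto3.client("controltower")

    controltower.enable_control(
        controlIdentifier="arn:aws:controltower:us-east-1::control/AWS-GR_ENCRYPTED_VOLUMES",
        targetIdentifier="arn:aws:organizations::111122223333:ou/o-exampleorg/ou-examplerootid-exampleouid",
    )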
Hence, the correct answer is: Set up an AWS Control Tower Landing Zone. Enable pre-
packaged guardrails to enforce policies or detect violations.
The option that says: Configure AWS Resource Access Manager (AWS RAM) to launch
new AWS accounts as well as standardize the baselines and network configurations for
each organization unit is incorrect. The AWS Resource Access Manager (RAM) service
simply helps you to securely share your resources across AWS accounts or within your
organization or organizational units (OUs) in AWS Organizations. It is not capable of launching
new AWS accounts with preapproved configurations.
The option that says: Set up an AWS Config aggregator on the root account of the
organization to enable multi-account, multi-region data aggregation. Deploy
conformance packs to standardize the baselines and network configurations for all
accounts is incorrect. AWS Config cannot provision accounts. A conformance pack is only a
collection of AWS Config rules and remediation actions that can be easily deployed as a single
entity in an account and a Region or across an organization in AWS Organizations.
The option that says: Centralize the creation of AWS accounts using AWS Systems
Manager OpsCenter. Enforce policies and detect violations to all AWS accounts using
AWS Security Hub is incorrect. AWS Systems Manager is just a collection of services used to
manage applications and infrastructure running in AWS that is usually in a single AWS account.
The AWS Systems Manager OpsCenter service is just one of the capabilities of AWS Systems Manager; it provides a central location where operations engineers and IT professionals can view, investigate, and resolve operational work items (OpsItems) related to AWS resources.
References:
https://docs.aws.amazon.com/controltower/latest/userguide/account-factory.html
https://aws.amazon.com/blogs/mt/how-to-automate-the-creation-of-multiple-accounts-in-aws-
control-tower/
https://aws.amazon.com/blogs/aws/aws-control-tower-set-up-govern-a-multi-account-aws-
environment/
Domain
Design Secure Architectures
Question 46Incorrect
A company wants to organize the way it tracks its spending on AWS resources. A report that
summarizes the total billing accrued by each department must be generated at the end of the
month.
Which solution will meet the requirements?
Correct answer
Tag resources with the department name and enable cost allocation tags.
Tag resources with the department name and configure a budget action in AWS Budget.
Create a Cost and Usage report for AWS services that each department is using.
Your answer is incorrect
Use AWS Cost Explorer to view spending and filter usage data by Resource.
Overall explanation
A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a
value. For each resource, each tag key must be unique, and each tag key can have only one
value. You can use tags to organize your resources and cost allocation tags to track your AWS
costs on a detailed level.
After you or AWS applies tags to your AWS resources (such as Amazon EC2 instances or
Amazon S3 buckets) and you activate the tags in the Billing and Cost Management console,
AWS generates a cost allocation report as a comma-separated value (CSV file) with your usage
and costs grouped by your active tags. You can apply tags that represent business categories
(such as cost centers, application names, or owners) to organize your costs across multiple
services.
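As a minimal sketch (the instance ID and tag values are placeholders, and the Cost Explorer call assumes the UpdateCostAllocationTagsStatus API is available in your SDK version), tagging a resource and activating the tag key for cost allocation could look like this:

    import boto3

    # Tag a resource with the department name.
    ec2 = boto3.client("ec2")
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],
        Tags=[{"Key": "Department", "Value": "Finance"}],
    )

    # Activate the tag key as a cost allocation tag (can also be done in the Billing console).
    ce = boto3.client("ce")
    ce.update_cost_allocation_tags_status(
        CostAllocationTagsStatus=[{"TagKey": "Department", "Status": "Active"}]
    )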
Hence, the correct answer is: Tag resources with the department name and enable cost
allocation tags.
The option that says: Tag resources with the department name and configure a budget
action in AWS Budget is incorrect. AWS Budgets only allows you to be alerted and run custom
actions if your budget thresholds are exceeded.
The option that says: Use AWS Cost Explorer to view spending and filter usage data by
Resource is incorrect. The Resource filter just lets you track costs on EC2 instances. This is
quite limited compared with using the Cost Allocation Tags method.
The option that says: Create a Cost and Usage report for AWS services that each
department is using is incorrect. The report must contain a breakdown of costs incurred by
each department based on tags and not based on AWS services, which is what the Cost and
Usage Report (CUR) contains.
References:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
https://aws.amazon.com/blogs/aws-cloud-financial-management/cost-allocation-blog-series-2-
aws-generated-vs-user-defined-cost-allocation-tag/
Check out this AWS Billing and Cost Management Cheat sheet:
https://tutorialsdojo.com/aws-billing-and-cost-management/
Domain
Design Cost-Optimized Architectures
Question 48Incorrect
A Solutions Architect created a new Standard-class Amazon S3 bucket to store financial reports
that are not frequently accessed but should immediately be available when an auditor requests
the reports. To save costs, the Architect changed the storage class of the S3 bucket from
Standard to Infrequent Access storage class.
In S3 Standard - Infrequent Access storage class, which of the following statements are true?
(Select TWO.)
It automatically moves data to the most cost-effective access tier without any operational
overhead.
Your selection is correct
It is designed for data that is accessed less frequently.
Your selection is incorrect
It provides high latency and low throughput performance
Correct selection
It is designed for data that requires rapid access when needed.
Ideal to use for data archiving.
Overall explanation
Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for
data that is accessed less frequently, but requires rapid access when needed. Standard - IA
offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per
GB storage price and per GB retrieval fee.
This combination of low cost and high performance makes Standard - IA ideal for long-term
storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is
set at the object level and can exist in the same bucket as Standard, allowing you to use
lifecycle policies to automatically transition objects between storage classes without any
application changes.
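For example, a lifecycle rule that transitions objects to Standard-IA 30 days after creation could be sketched with boto3 as follows (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="financial-reports-example",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "to-standard-ia",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }]
        },
    )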
Key Features:
- Same low latency and high throughput performance of Standard
- Designed for durability of 99.999999999% of objects
- Designed for 99.9% availability over a given year
- Backed with the Amazon S3 Service Level Agreement for availability
- Supports SSL encryption of data in transit and at rest
- Lifecycle management for automatic migration of objects
Hence, the correct answers are:
- It is designed for data that is accessed less frequently.
- It is designed for data that requires rapid access when needed.
The option that says: It automatically moves data to the most cost-effective access tier
without any operational overhead is incorrect because it actually refers to Amazon S3 -
Intelligent Tiering, which is the only cloud storage class that delivers automatic cost savings by
moving objects between different access tiers when access patterns change.
The option that says: It provides high latency and low throughput performance is incorrect
because it should just be "low latency" and "high throughput" instead. S3 automatically scales
performance to meet user demands.
The option that says: Ideal to use for data archiving is incorrect because this statement refers
to Amazon S3 Glacier. Glacier is a secure, durable, and extremely low-cost cloud storage
service for data archiving and long-term backup.
References:
https://aws.amazon.com/s3/storage-classes/
https://aws.amazon.com/s3/faqs
Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/amazon-s3/
Domain
Design High-Performing Architectures
Question 53Incorrect
A company plans to migrate its suite of containerized applications running on-premises to a
container service in AWS. The solution must be cloud-agnostic and use an open-source
platform that can automatically manage containerized workloads and services. It should also
use the same configuration and tools across various production environments.
What should the Solution Architect do to properly migrate and satisfy the given requirement?
Migrate the application to Amazon Elastic Container Service with ECS tasks that use the
Amazon EC2 launch type.
Your answer is incorrect
Migrate the application to Amazon Elastic Container Service with ECS tasks that use the
AWS Fargate launch type.
Migrate the application to Amazon Container Registry (ECR) with Amazon EC2 instance
worker nodes.
Correct answer
Migrate the application to Amazon Elastic Kubernetes Service with EKS worker nodes.
Overall explanation
Amazon EKS provisions and scales the Kubernetes control plane, including the API servers
and backend persistence layer, across multiple AWS availability zones for high availability and
fault tolerance. Amazon EKS automatically detects and replaces unhealthy control plane nodes
and provides patching for the control plane. Amazon EKS is integrated with many AWS services
to provide scalability and security for your applications. These services include Elastic Load
Balancing for load distribution, IAM for authentication, Amazon VPC for isolation, and AWS
CloudTrail for logging.
To migrate the application to a container service, you can use Amazon ECS or Amazon EKS.
But the key point in this scenario is the need for a cloud-agnostic, open-source platform. Take note that
Amazon ECS is an AWS proprietary container service. This means that it is not an open-source
platform. Amazon EKS is a portable, extensible, and open-source platform for managing
containerized workloads and services. Kubernetes is considered cloud-agnostic because it
allows you to move your containers to other cloud service providers.
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use
all of the existing plugins and tools from the Kubernetes community. Applications running on
Amazon EKS are fully compatible with applications running on any standard Kubernetes
environment, whether running in on-premises data centers or public clouds. This means that
you can easily migrate any standard Kubernetes application to Amazon EKS without any code
modification required.
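A minimal boto3 sketch of creating the EKS control plane is shown below; the cluster role ARN and subnet IDs are placeholders, and worker nodes (for example, a managed node group) would be added afterwards:

    import boto3

    eks = boto3.client("eks")

    eks.create_cluster(
        name="migrated-apps",
        roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",
        resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
    )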
Hence, the correct answer is: Migrate the application to Amazon Elastic Kubernetes
Service with EKS worker nodes.
The option that says: Migrate the application to Amazon Container Registry (ECR) with
Amazon EC2 instance worker nodes is incorrect because Amazon ECR is just a fully-
managed Docker container registry. Also, this option is not an open-source platform that can
manage containerized workloads and services.
The option that says: Migrate the application to Amazon Elastic Container Service with
ECS tasks that use the AWS Fargate launch type is incorrect because it is stated in the
scenario that you have to migrate the application suite to an open-source platform. AWS
Fargate is just a serverless compute engine for containers. It is not cloud-agnostic since you
cannot use the same configuration and tools if you move it to another cloud service provider
such as Microsoft Azure or Google Cloud Platform (GCP).
The option that says: Migrate the application to Amazon Elastic Container Service with
ECS tasks that use the Amazon EC2 launch type is incorrect because Amazon ECS is an
AWS proprietary managed container orchestration service. You should use Amazon EKS since
Kubernetes is an open-source platform and is considered cloud-agnostic. With Kubernetes, you
can use the same configuration and tools that you're currently using in AWS even if you move
your containers to another cloud service provider.
References:
https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
https://aws.amazon.com/eks/faqs/
Check out our library of AWS Cheat Sheets:
https://tutorialsdojo.com/links-to-all-aws-cheat-sheets/
Domain
Design High-Performing Architectures
Question 55Incorrect
A media company has two VPCs: VPC-1 and VPC-2, with a peering connection between them. VPC-1 only contains private subnets while VPC-2 only contains public subnets. The
company uses a single AWS Direct Connect connection and a virtual interface to connect their
on-premises network with VPC-1.
Which of the following options increase the fault tolerance of the connection to VPC-1? (Select
TWO.)
Your selection is incorrect
Establish a new AWS Direct Connect connection and private virtual interface in the same
region as VPC-2.
Correct selection
Establish another AWS Direct Connect connection and private virtual interface in the
same AWS region as VPC-1.
Establish a hardware VPN over the Internet between VPC-2 and the on-premises network.
Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and private
virtual interface in the same region as VPC-2.
Your selection is correct
Establish a hardware VPN over the Internet between VPC-1 and the on-premises network.
Overall explanation
In this scenario, you have two VPCs which have peering connections with each other. Note that
a VPC peering connection does not support edge to edge routing. This means that if either VPC
in a peering relationship has one of the following connections, you cannot extend the peering
relationship to that connection:
- A VPN connection or an AWS Direct Connect connection to a corporate network
- An Internet connection through an Internet gateway
- An Internet connection in a private subnet through a NAT device
- A gateway VPC endpoint to an AWS service; for example, an endpoint to Amazon S3.
- (IPv6) A ClassicLink connection. You can enable IPv4 communication between a linked EC2-
Classic instance and instances in a VPC on the other side of a VPC peering connection.
However, IPv6 is not supported in EC2-Classic, so you cannot extend this connection for IPv6
communication.
For example, if VPC A and VPC B are peered, and VPC A has any of these connections, then instances in VPC B cannot
use the connection to access resources on the other side of the connection. Similarly, resources
on the other side of a connection cannot use the connection to access VPC B.
Hence, this means that you cannot use VPC-2 to extend the peering relationship that exists
between VPC-1 and the on-premises network. For example, traffic from the corporate network
can't directly access VPC-1 by using the VPN connection or the AWS Direct Connect
connection to VPC-2, which is why the following options are incorrect:
- Use the AWS VPN CloudHub to create a new AWS Direct Connect connection and
private virtual interface in the same region as VPC-2.
- Establish a hardware VPN over the Internet between VPC-2 and the on-premises
network.
- Establish a new AWS Direct Connect connection and private virtual interface in the
same region as VPC-2.
You can do the following to provide a highly available, fault-tolerant network connection:
- Establish a hardware VPN over the Internet between the VPC and the on-premises
network.
- Establish another AWS Direct Connect connection and private virtual interface in the
same AWS region.
References:
https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html#edge-to-
edge-vgw
https://aws.amazon.com/premiumsupport/knowledge-center/configure-vpn-backup-dx/
https://aws.amazon.com/answers/networking/aws-multiple-data-center-ha-network-connectivity/
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/
Domain
Design Secure Architectures
Question 56Incorrect
A company is running a custom application in an Auto Scaling group of Amazon EC2 instances.
Several instances are failing due to insufficient swap space. The Solutions Architect has been
instructed to troubleshoot the issue and effectively monitor the available swap space of each
EC2 instance.
Which of the following options fulfills this requirement?
Create a new trail in AWS CloudTrail and configure Amazon CloudWatch Logs to monitor
your trail logs.
Correct answer
Install the CloudWatch agent on each instance and monitor the SwapUtilization
metric.
Your answer is incorrect
Enable detailed monitoring on each instance and monitor the SwapUtilization metric.
Create a CloudWatch dashboard and monitor the SwapUsed metric.
Overall explanation
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications
you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and
monitor log files, and set alarms. Amazon CloudWatch can monitor AWS resources such as
Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well
as custom metrics generated by your applications and services and any log files your
applications generate. You can use Amazon CloudWatch to gain system-wide visibility into
resource utilization, application performance, and operational health.
The main requirement in the scenario is to monitor the SwapUtilization metric. Take note
that you can't use the default metrics of CloudWatch to monitor the SwapUtilization metric.
To monitor custom metrics, you must install the CloudWatch agent on the EC2 instance. After
installing the CloudWatch agent, you can now collect system metrics and log files of an EC2
instance.
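As a sketch, the CloudWatch agent configuration that collects swap metrics could be generated like this; the file path is the conventional agent location and may differ on your AMI, and the agent publishes the measurement as swap_used_percent under the CWAgent namespace:

    import json

    config = {
        "metrics": {
            "namespace": "CWAgent",
            "metrics_collected": {
                "swap": {"measurement": ["swap_used_percent"]}
            }
        }
    }

    # Write the agent configuration, then load it with amazon-cloudwatch-agent-ctl.
    with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
        json.dump(config, f, indent=2)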
Hence, the correct answer is: Install the CloudWatch agent on each instance and monitor
the SwapUtilization metric.
The option that says: Enable detailed monitoring on each instance and monitor the
SwapUtilization metric is incorrect because you can't monitor the SwapUtilization
metric by just enabling the detailed monitoring option. You must install the CloudWatch agent on
the instance.
The option that says: Create a CloudWatch dashboard and monitor the SwapUsed metric is
incorrect because you must install the CloudWatch agent first to add the custom metric in the
dashboard.
The option that says: Create a new trail in AWS CloudTrail and configure Amazon
CloudWatch Logs to monitor your trail logs is incorrect because CloudTrail won't help you
monitor custom metrics. CloudTrail is specifically used for monitoring API activities in an AWS
account.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
https://aws.amazon.com/cloudwatch/faqs/
Check out this Amazon CloudWatch Cheat Sheet:
https://tutorialsdojo.com/amazon-cloudwatch/
Domain
Design Resilient Architectures
Question 57Incorrect
A commercial bank has a forex trading application. They created an Auto Scaling group of EC2
instances that allow the bank to cope with the current traffic and achieve cost-efficiency. They
want the Auto Scaling group to behave in such a way that it will follow a predefined set of
parameters before it scales down the number of EC2 instances, which protects the system from
unintended slowdown or unavailability.
Which of the following statements are true regarding the cooldown period? (Select TWO.)
It ensures that the Auto Scaling group launches or terminates additional EC2 instances
without any downtime.
Correct selection
Its default value is 300 seconds.
Your selection is incorrect
It ensures that before the Auto Scaling group scales out, the EC2 instances have ample
time to cooldown.
Your selection is correct
It ensures that the Auto Scaling group does not launch or terminate additional EC2
instances before the previous scaling activity takes effect.
Its default value is 600 seconds.
Overall explanation
In Auto Scaling, the following statements are correct regarding the cooldown period:
- It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.
- Its default value is 300 seconds.
- It is a configurable setting for your Auto Scaling group.
The following options are incorrect:
- It ensures that before the Auto Scaling group scales out, the EC2 instances have ample
time to cooldown.
- It ensures that the Auto Scaling group launches or terminates additional EC2 instances
without any downtime.
- Its default value is 600 seconds.
These statements are inaccurate and don't depict what the word "cooldown" actually means for
Auto Scaling. The cooldown period is a configurable setting for your Auto Scaling group that
helps to ensure that it doesn't launch or terminate additional instances before the previous
scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple
scaling policy, it waits for the cooldown period to complete before resuming scaling activities.
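For illustration, the default cooldown can be set explicitly on the group with boto3 (the group name is a placeholder; 300 seconds is also the default value):

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="forex-asg",
        DefaultCooldown=300,
    )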
Reference:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html
Check out this AWS Auto Scaling Cheat Sheet:
https://tutorialsdojo.com/aws-auto-scaling/
Tutorials Dojo's AWS Certified Solutions Architect Associate Exam Study Guide:
https://tutorialsdojo.com/aws-certified-solutions-architect-associate/
Domain
Design Resilient Architectures
Question 58Incorrect
An online events registration system is hosted in AWS and uses ECS to host its front-end tier
and an RDS configured with Multi-AZ for its database tier. What are the events that will make
Amazon RDS automatically perform a failover to the standby replica? (Select TWO.)
Your selection is incorrect
Storage failure on secondary DB instance
Correct selection
Storage failure on primary
Your selection is incorrect
In the event of Read Replica failure
Correct selection
Loss of availability in primary Availability Zone
Compute unit failure on secondary DB instance
Overall explanation
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ
deployments. Amazon RDS uses several different technologies to provide failover support.
Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use
Amazon's failover technology. SQL Server DB instances use SQL Server Database Mirroring
(DBM).
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous
standby replica in a different Availability Zone. The primary DB instance is synchronously
replicated across Availability Zones to a standby replica to provide data redundancy, eliminate
I/O freezes, and minimize latency spikes during system backups. Running a DB instance with
high availability can enhance availability during planned system maintenance and help protect
your databases against DB instance failure and Availability Zone disruption.
Amazon RDS detects and automatically recovers from the most common failure scenarios for
Multi-AZ deployments so that you can resume database operations as quickly as possible
without administrative intervention.
The high-availability feature is not a scaling solution for read-only scenarios; you cannot use a
standby replica to serve read traffic. To service read-only traffic, you should use a Read
Replica.
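As a minimal sketch, enabling Multi-AZ is a single flag when the DB instance is created (the identifier, instance class, and credentials below are placeholders):

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="events-db",
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="example-password",
        MultiAZ=True,  # provisions the synchronous standby replica in another AZ
    )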
Amazon RDS automatically performs a failover in the event of any of the following:
1. Loss of availability in primary Availability Zone.
2. Loss of network connectivity to primary.
3. Compute unit failure on primary.
4. Storage failure on primary.
Hence, the correct answers are:
- Loss of availability in primary Availability Zone
- Storage failure on primary
The following options are incorrect because all these scenarios do not affect the primary
database. Automatic failover only occurs if the primary database is the one that is affected.
- Storage failure on secondary DB instance
- In the event of Read Replica failure
- Compute unit failure on secondary DB instance
References:
https://aws.amazon.com/rds/details/multi-az/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/amazon-relational-database-service-amazon-rds/
Domain
Design Secure Architectures
Question 60Incorrect
A company has developed public APIs hosted in Amazon EC2 instances behind an Elastic Load
Balancer. The APIs will be used by various clients from their respective on-premises data
centers. A Solutions Architect received a report that the web service clients can only access
trusted IP addresses whitelisted on their firewalls.
What should you do to accomplish the above requirement?
Correct answer
Associate an Elastic IP address to a Network Load Balancer.
Create a CloudFront distribution whose origin points to the private IP addresses of your
web servers.
Your answer is incorrect
Associate an Elastic IP address to an Application Load Balancer.
Create an Alias Record in Route 53 which maps to the DNS name of the load balancer.
Overall explanation
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection
(OSI) model. It can handle millions of requests per second. After the load balancer receives a
connection request, it selects a target from the default rule's target group. It attempts to open a
TCP connection to the selected target on the port specified in the listener configuration.
Based on the given scenario, web service clients can only access trusted IP addresses. To
resolve this requirement, you can use the Bring Your Own IP (BYOIP) feature to use the trusted
IPs as Elastic IP addresses (EIP) to a Network Load Balancer (NLB). This way, there's no need
to re-establish the whitelists with new IP addresses.
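A rough boto3 sketch of attaching an Elastic IP allocation to a Network Load Balancer at creation time is shown below; the subnet and allocation IDs are placeholders, and one mapping is supplied per Availability Zone:

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_load_balancer(
        Name="public-api-nlb",
        Type="network",
        Scheme="internet-facing",
        SubnetMappings=[
            {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-0123456789abcdef0"},
        ],
    )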
Hence, the correct answer is: Associate an Elastic IP address to a Network Load Balancer.
The option that says: Associate an Elastic IP address to an Application Load Balancer is
incorrect because you can't assign an Elastic IP address to an Application Load Balancer. The
alternative method you can do is assign an Elastic IP address to a Network Load Balancer in
front of the Application Load Balancer.
The option that says: Create a CloudFront distribution whose origin points to the private IP addresses of your web servers is incorrect because CloudFront does not give the clients a small, fixed set of IP addresses that they can whitelist on their firewalls. The simplest way to satisfy the requirement is to attach an Elastic IP address to a Network Load Balancer.
The option that says: Create an Alias Record in Route 53 which maps to the DNS name of the load balancer is incorrect. This approach still won't allow the clients to access the application because their firewalls whitelist specific trusted IP addresses, not DNS names.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/elb-attach-elastic-ip-to-public-nlb/
https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-
application-load-balancers/
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
Check out this AWS Elastic Load Balancing Cheat Sheet:
https://tutorialsdojo.com/aws-elastic-load-balancing-elb/
Domain
Design High-Performing Architectures
Question 62Incorrect
An organization needs to control access to several Amazon S3 buckets. The organization plans
to use a gateway endpoint to allow access to trusted buckets.
Which of the following could help you achieve this requirement?
Correct answer
Generate an endpoint policy for trusted S3 buckets.
Generate an endpoint policy for trusted VPCs.
Generate a bucket policy for trusted VPCs.
Your answer is incorrect
Generate a bucket policy for trusted S3 buckets.
Overall explanation
A Gateway endpoint is a type of VPC endpoint that provides reliable connectivity to Amazon S3
and DynamoDB without requiring an internet gateway or a NAT device for your VPC. Instances
in your VPC do not require public IP addresses to communicate with resources in the service.
When you create a Gateway endpoint, you can attach an endpoint policy that controls access to
the service to which you are connecting. You can modify the endpoint policy attached to your
endpoint and add or remove the route tables used by the endpoint. An endpoint policy does not
override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It
is a separate policy for controlling access from the endpoint to the specified service.
We can use either a bucket policy or an endpoint policy to allow traffic to trusted S3 buckets, so the options that mention 'trusted S3 buckets' are the candidates in this scenario. However, it would take a lot of time to configure a bucket policy for each S3 bucket compared with using a single endpoint policy. Therefore, you should use an endpoint policy to control the traffic to the trusted Amazon S3 buckets.
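As an illustrative sketch (the VPC, route table, bucket names, and region are placeholders), creating the gateway endpoint with an endpoint policy that only allows the trusted buckets could look like this:

    import json
    import boto3

    ec2 = boto3.client("ec2")

    # Endpoint policy restricting the gateway endpoint to the trusted buckets.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::trusted-bucket",
                "arn:aws:s3:::trusted-bucket/*",
            ],
        }],
    }

    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0123456789abcdef0"],
        PolicyDocument=json.dumps(policy),
    )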
Hence, the correct answer is: Generate an endpoint policy for trusted S3 buckets.
The option that says: Generate a bucket policy for trusted S3 buckets is incorrect. Although
this is a valid solution, it takes a lot of time to set up a bucket policy for each and every S3
bucket. This can be simplified by whitelisting access to trusted S3 buckets in a single S3
endpoint policy.
The option that says: Generate a bucket policy for trusted VPCs is incorrect because you are
generating a policy for trusted VPCs. Remember that the scenario only requires you to allow the
traffic for trusted S3 buckets, not to the VPCs.
The option that says: Generate an endpoint policy for trusted VPCs is incorrect because it
only allows access to trusted VPCs, and not to trusted Amazon S3 buckets.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html
https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/amazon-vpc/
Domain
Design Secure Architectures
Question 65Incorrect
A solutions architect is instructed to host a website that consists of HTML, CSS, and some
Javascript files. The web pages will display several high-resolution images. The website should
have optimal loading times and be able to respond to high request rates.
Which of the following architectures can provide the most cost-effective and fastest loading
experience?
Host the website using an Nginx server in an EC2 instance. Upload the images in an S3
bucket. Use CloudFront as a CDN to deliver the images closer to end-users.
Your answer is incorrect
Host the website in an AWS Elastic Beanstalk environment. Upload the images in an S3
bucket. Use CloudFront as a CDN to deliver the images closer to your end-users.
Launch an Auto Scaling Group using an AMI that has a pre-configured Apache web
server, then configure the scaling policy accordingly. Store the images in an Elastic
Block Store. Then, point your instance’s endpoint to AWS Global Accelerator.
Correct answer
Upload the HTML, CSS, Javascript, and the images in a single bucket. Then enable
website hosting. Create a CloudFront distribution and point the domain on the S3
website endpoint.
Overall explanation
Amazon S3 is an object storage service that offers industry-leading scalability, data availability,
security, and performance. Additionally, You can use Amazon S3 to host a static website. On a
static website, individual webpages include static content. Amazon S3 is highly scalable and you only pay for what you use. You can start small and grow your application as you wish, with no compromise on performance or reliability.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency, high transfer
speeds. CloudFront can be integrated with Amazon S3 for fast delivery of data originating from
an S3 bucket to your end-users. By design, delivering data out of CloudFront can be more cost-
effective than delivering it from S3 directly to your users.
In this scenario, since we are only dealing with static content, we can leverage the web hosting feature of S3. Then we can improve the architecture further by integrating it with CloudFront.
This way, users will be able to load both the web pages and images faster than if we hosted
them on a webserver that we built from scratch.
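For example, enabling static website hosting on the bucket is a single API call (the bucket and document names are placeholders); a CloudFront distribution would then be created with the bucket's website endpoint as its origin:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_website(
        Bucket="corporate-site-example",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )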
Hence, the correct answer is: Upload the HTML, CSS, Javascript, and the images in a
single bucket. Then enable website hosting. Create a CloudFront distribution and point
the domain on the S3 website endpoint.
The option that says: Host the website using an Nginx server in an EC2 instance. Upload
the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to
end-users is incorrect. Creating your own web server to host a static website in AWS is a costly
solution. Web Servers on an EC2 instance are usually used for hosting applications that require
server-side processing (connecting to a database, data validation, etc.). Since static websites
contain web pages with fixed content, we should use S3 website hosting instead.
The option that says: Launch an Auto Scaling Group using an AMI that has a pre-
configured Apache web server, then configure the scaling policy accordingly. Store the
images in an Elastic Block Store. Then, point your instance’s endpoint to AWS Global
Accelerator is incorrect. This is how static websites were served in the old days. Now, with S3 website hosting, we can serve static content from a durable, highly available, and highly scalable environment without managing any servers. Hosting static websites in S3 is cheaper than hosting them on an EC2 instance. In addition, using an Auto Scaling group of instances to host a static website is an over-engineered solution that carries unnecessary costs. S3 automatically scales to handle high request rates, and you only pay for what you use.
The option that says: Host the website in an AWS Elastic Beanstalk environment. Upload
the images in an S3 bucket. Use CloudFront as a CDN to deliver the images closer to
your end-users is incorrect. AWS Elastic Beanstalk simply sets up the infrastructure (EC2
instance, load balancer, auto-scaling group) for your application. It's a more expensive and a bit
of an overkill solution for hosting a bunch of client-side files.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-
a-match-made-in-the-cloud/
Check out these Amazon S3 and CloudFront Cheat Sheets:
https://tutorialsdojo.com/amazon-s3/
https://tutorialsdojo.com/amazon-cloudfront/
Domain
Design Cost-Optimized Architectures