Amazon-Web-Services
Exam Questions SAP-C02
AWS Certified Solutions Architect - Professional
NEW QUESTION 1
- (Exam Topic 1)
A startup company recently migrated a large ecommerce website to AWS. The website has experienced a 70% increase in sales. Software engineers are using a
private GitHub repository to manage code. The DevOps team is using Jenkins for builds and unit testing. The engineers need to receive notifications for bad builds
and zero downtime during deployments. The engineers also need to ensure any changes to production are seamless for users and can be rolled back in the event
of a major issue.
The software engineers have decided to use AWS CodePipeline to manage their build and deployment process.
Which solution will meet these requirements?
Answer: B
NEW QUESTION 2
- (Exam Topic 1)
A company is running an application distributed over several Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The security team requires that all application access attempts be made available for analysis. Information about the client IP address, connection type, and user agent must be included.
Which solution will meet these requirements?
A. Enable EC2 detailed monitoring, and include network logs. Send all logs through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.
B. Enable VPC Flow Logs for all EC2 instance network interfaces. Publish VPC Flow Logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs.
C. Enable access logs for the Application Load Balancer, and publish the logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs.
D. Enable Traffic Mirroring and specify all EC2 instance network interfaces as the source. Send all traffic information through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis.
Answer: C
Explanation:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
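For reference, ALB access logging is switched on through load balancer attributes. A minimal boto3 sketch, in which the load balancer ARN, bucket name, and Region are placeholders:

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Enable access logging on the ALB; each log entry includes client:port,
# the user agent, and connection details for every request.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)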
NEW QUESTION 3
- (Exam Topic 1)
A company has a complex web application that leverages Amazon CloudFront for global scalability and performance. Over time, users report that the web
application is slowing down.
The company's operations team reports that the CloudFront cache hit ratio has been dropping steadily. The cache metrics report indicates that query strings on
some URLs are inconsistently ordered and are specified sometimes in mixed-case letters and sometimes in lowercase letters.
Which set of actions should the solutions architect take to increase the cache hit ratio as quickly as possible?
A. Deploy a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
B. Update the CloudFront distribution to disable caching based on query string parameters.
C. Deploy a reverse proxy after the load balancer to post-process the emitted URLs in the application to force the URL strings to be lowercase.
D. Update the CloudFront distribution to specify case-insensitive query string processing.
Answer: A
Explanation:
https://docs.amazonaws.cn/en_us/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-ex
Before CloudFront serves content from the cache, it triggers any Lambda function associated with the viewer request event, in which the query string parameters can be normalized.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examp
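A minimal sketch of the normalization described above, written as a Python Lambda@Edge viewer request handler (Lambda@Edge also supports Node.js; the field names follow the CloudFront event structure):

from urllib.parse import parse_qsl, urlencode

def handler(event, context):
    # CloudFront passes the request object for viewer request triggers.
    request = event["Records"][0]["cf"]["request"]
    params = parse_qsl(request["querystring"])
    # Lowercase keys and values, then sort by parameter name so that
    # equivalent URLs always produce the same cache key.
    normalized = sorted((k.lower(), v.lower()) for k, v in params)
    request["querystring"] = urlencode(normalized)
    return request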
NEW QUESTION 4
- (Exam Topic 1)
A company has an Amazon VPC that is divided into a public subnet and a private subnet. A web application runs in the VPC, and each subnet has its own NACL. The public subnet has a CIDR of 10.0.0.0/24. An Application Load Balancer is deployed to the public subnet. The private subnet has a CIDR of 10.0.1.0/24. Amazon EC2 instances that run a web server on port 80 are launched into the private subnet.
Only network traffic that is required for the Application Load Balancer to access the web application can be allowed to travel between the public and private subnets.
What collection of rules should be written to ensure that the private subnet's NACL meets the requirement? (Select TWO.)
Answer: BE
Explanation:
Ephemeral ports are not covered in the syllabus, so be careful not to confuse day-to-day best practice with what is required for the exam. Link to an explanation of ephemeral ports here: https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-KUbcwo4lXefMl7janaK/netw
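The answer options are not reproduced in this dump, but the intended rule set is typically: allow inbound TCP port 80 from the public subnet, and allow outbound TCP on the ephemeral port range back to it, because NACLs are stateless. A boto3 sketch with a placeholder NACL ID:

import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # placeholder: the private subnet's NACL

# Inbound: allow the ALB in 10.0.0.0/24 to reach the web servers on port 80.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False, CidrBlock="10.0.0.0/24",
    PortRange={"From": 80, "To": 80},
)

# Outbound: return traffic goes back to the ALB on ephemeral ports.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="10.0.0.0/24",
    PortRange={"From": 1024, "To": 65535},
)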
NEW QUESTION 5
- (Exam Topic 1)
A company is running a web application on Amazon EC2 instances in a production AWS account. The company requires all logs generated from the web application to be copied to a central AWS account for analysis and archiving. The company's AWS accounts are currently managed independently. Logging agents are configured on the EC2 instances to upload the log files to an Amazon S3 bucket in the central AWS account.
A solutions architect needs to provide access for a solution that will allow the production account to store log files in the central account. The central account also needs to have read access to the log files.
What should the solutions architect do to meet these requirements?
Answer: B
NEW QUESTION 6
- (Exam Topic 1)
A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also
runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job
runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling
group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed
data for the duration of the 72-hour run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?
A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.
Answer: B
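Two of the options are built on linking a short-lived FSx for Lustre file system to the dataset in Amazon S3. A minimal boto3 sketch of that mechanism, with placeholder bucket, subnet, and sizing (lazy loading imports file metadata up front and pulls file contents from S3 on first access):

import boto3

fsx = boto3.client("fsx")

# Create a scratch FSx for Lustre file system linked to the S3 data.
# Only the subset of files the job actually touches is hydrated.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=240000,  # placeholder sizing, in GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://shared-dataset-bucket",  # placeholder bucket
    },
)
print(response["FileSystem"]["FileSystemId"])

# After the 72-hour job completes, delete the file system to stop charges:
# fsx.delete_file_system(FileSystemId=...)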
NEW QUESTION 7
- (Exam Topic 1)
A company is developing and hosting several projects in the AWS Cloud. The projects are developed across multiple AWS accounts under the same organization in AWS Organizations. The company requires the cost for cloud infrastructure to be allocated to the owning project. The team responsible for all of the AWS accounts has discovered that several Amazon EC2 instances are lacking the Project tag used for cost allocation.
Which actions should a solutions architect take to resolve the problem and prevent it from happening in the future? (Select THREE.)
A. Create an AWS Config rule in each account to find resources with missing tags.
B. Create an SCP in the organization with a deny action for ec2:RunInstances if the Project tag is missing.
C. Use Amazon Inspector in the organization to find resources with missing tags.
D. Create an IAM policy in each account with a deny action for ec2:RunInstances if the Project tag is missing.
E. Create an AWS Config aggregator for the organization to collect a list of EC2 instances with the missing Project tag.
F. Use AWS Security Hub to aggregate a list of EC2 instances with the missing Project tag.
Answer: BDE
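A sketch of the SCP in option B, created and attached with boto3 (the OU ID is a placeholder). The Null condition on aws:RequestTag denies any ec2:RunInstances call that does not supply a Project tag:

import json
import boto3

orgs = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRunInstancesWithoutProjectTag",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"Null": {"aws:RequestTag/Project": "true"}},
    }],
}

policy = orgs.create_policy(
    Name="require-project-tag",
    Description="Deny launching EC2 instances without a Project tag",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # placeholder OU ID
)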
NEW QUESTION 8
- (Exam Topic 1)
A company built an ecommerce website on AWS using a three-tier web architecture. The application is
Java-based and composed of an Amazon CloudFront distribution, an Apache web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend
Amazon Aurora MySQL database.
Last month, during a promotional sales event, users reported errors and timeouts while adding items to their shopping carts. The operations team recovered the
logs created by the web servers and reviewed Aurora DB cluster performance metrics. Some of the web servers were terminated before logs could be collected
and the Aurora metrics were not sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application performance visibility during peak traffic events? (Select THREE.)
A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.
B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for Java.
C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis.
D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
E. Enable and configure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.
F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.
Answer: ABD
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.MySQL.html
https://aws.amazon.com/blogs/mt/simplifying-apache-server-logs-with-amazon-cloudwatch-logs-insights/
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-dotnet-messagehandler.html
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-sqlclients.html
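The keyed answer references the X-Ray SDK for Java; the same pattern in the X-Ray SDK for Python is shown here purely to illustrate the idea of tracing both incoming HTTP requests and downstream SQL calls (service name and route are hypothetical):

# Illustrative only: the question's application is Java, but the X-Ray
# pattern is the same across SDKs (pip install aws-xray-sdk flask).
from flask import Flask
from aws_xray_sdk.core import xray_recorder, patch_all
from aws_xray_sdk.ext.flask.middleware import XRayMiddleware

app = Flask(__name__)
xray_recorder.configure(service="shopping-cart")  # placeholder service name
XRayMiddleware(app, xray_recorder)  # trace every incoming HTTP request

# patch_all() instruments supported libraries (e.g., MySQL connectors,
# requests) so downstream SQL queries appear as subsegments in the trace.
patch_all()

@app.route("/cart")
def cart():
    return "ok"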
NEW QUESTION 9
- (Exam Topic 1)
To abide by industry regulations, a solutions architect must design a solution that will store a company's critical data in multiple public AWS Regions, including in
the United States, where the company's headquarters is located. The solutions architect is required to provide access to the data stored in AWS to the company's
global WAN network. The security team mandates that no traffic accessing this data should traverse the public internet.
How should the solutions architect design a highly available solution that meets the requirements and is cost-effective?
A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send traffic over to the headquarters and then to the respective DX connection to access the data.
B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use inter-region VPC peering to access the data in other AWS Regions.
C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.
D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use Direct Connect gateway to access data in other AWS Regions.
Answer: D
Explanation:
This feature also allows you to connect to any of the participating VPCs from any Direct Connect location, further reducing your costs for using AWS services on a cross-Region basis. https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-aws-transit-g
NEW QUESTION 10
- (Exam Topic 1)
A solutions architect is responsible for redesigning a legacy Java application to improve its availability, data durability, and scalability. Currently, the application
runs on a single high-memory Amazon EC2 instance. It accepts HTTP requests from upstream clients, adds them to an in-memory queue, and responds with a
200 status. A separate application thread reads items from the queue, processes them, and persists the results to an Amazon RDS MySQL instance. The
processing time for each item takes 90 seconds on average, most of which is spent waiting on external service calls, but the application is written to process
multiple items in parallel.
Traffic to this service is unpredictable. During periods of high load, items may sit in the internal queue for over an hour while the application processes the backlog.
In addition, the current system has issues with availability and data loss if the single application node fails.
Clients that access this service cannot be modified. They expect to receive a response to each HTTP request they send within 10 seconds before they will time out
and retry the request.
Which approach would improve the availability and durability of the system while decreasing the processing latency and minimizing costs?
A. Create an Amazon API Gateway REST API that uses Lambda proxy integration to pass requests to an AWS Lambda function. Migrate the core processing code to a Lambda function and write a wrapper class that provides a handler method that converts the proxy events to the internal application data model and invokes the processing module.
B. Create an Amazon API Gateway REST API that uses a service proxy to put items in an Amazon SQS queue. Extract the core processing code from the existing application and update it to pull items from Amazon SQS instead of an in-memory queue. Deploy the new processing application to smaller EC2 instances within an Auto Scaling group that scales dynamically based on the approximate number of messages in the Amazon SQS queue.
C. Modify the application to use Amazon DynamoDB instead of Amazon RDS. Configure Auto Scaling for the DynamoDB table. Deploy the application within an Auto Scaling group with a scaling policy based on CPU utilization. Back the in-memory queue with a memory-mapped file to an instance store volume and periodically write that file to Amazon S3.
D. Update the application to use a Redis task queue instead of the in-memory queue. Build a Docker container image for the application. Create an Amazon ECS task definition that includes the application container and a separate container to host Redis. Deploy the new task definition as an ECS service using AWS Fargate, and enable Auto Scaling.
Answer: B
Explanation:
The obvious challenges here are long-running work items, scaling based on queue load, and reliability. The de facto answer to queue-related workloads is SQS. Because each item takes about 90 seconds to process, a synchronous Lambda proxy integration cannot return a response within the clients' 10-second timeout (and API Gateway caps integration timeouts at 29 seconds anyway). Auto Scaled smaller EC2 instances that wait on the external service calls therefore make more sense. If a task fails, the message is returned to the queue and retried.
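A sketch of scaling the Auto Scaling group on queue depth, as option B describes (names and thresholds are placeholders; production setups often scale on a backlog-per-instance metric instead of raw depth):

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step scaling policy: add instances when the backlog grows.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="item-processors",  # placeholder ASG name
    PolicyName="scale-out-on-queue-depth",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2}],
)

# Alarm on the approximate number of visible messages in the SQS queue.
cloudwatch.put_metric_alarm(
    AlarmName="item-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "item-queue"}],  # placeholder
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)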
NEW QUESTION 10
- (Exam Topic 1)
A team collects and routes behavioral data for an entire company. The company runs a Multi-AZ VPC environment with public subnets, private subnets, and an internet gateway. Each public subnet also contains a NAT gateway. Most of the company's applications read from and write to Amazon Kinesis Data Streams. Most of the workloads run in private subnets.
A solutions architect must review the infrastructure. The solutions architect needs to reduce costs and maintain the function of the applications. The solutions architect uses Cost Explorer and notices that the cost in the EC2-Other category is consistently high. A further review shows that NatGateway-Bytes charges are increasing the cost in the EC2-Other category.
What should the solutions architect do to meet these requirements?
Answer: D
Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html
https://aws.amazon.com/premiumsupport/knowledge-center/vpc-reduce-nat-gateway-transfer-costs/
VPC endpoint policies enable you to control access either by attaching a policy to a VPC endpoint or by using additional fields in a policy that is attached to an IAM user, group, or role to restrict access to only occur via the specified VPC endpoint.
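A sketch of replacing the NAT gateway path with an interface VPC endpoint for Kinesis Data Streams, so the private-subnet workloads stop incurring NatGateway-Bytes charges (all IDs are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An interface endpoint keeps Kinesis traffic on the AWS network instead
# of routing it through the NAT gateways.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",  # placeholder
    ServiceName="com.amazonaws.us-east-1.kinesis-streams",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)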
NEW QUESTION 11
- (Exam Topic 1)
A large company is running a popular web application. The application runs on several Amazon EC2 Linux instances in an Auto Scaling group in a private subnet. An Application Load Balancer is targeting the instances in the Auto Scaling group in the private subnet. AWS Systems Manager Session Manager is configured, and AWS Systems Manager Agent is running on all the EC2 instances.
The company recently released a new version of the application. Some EC2 instances are now being marked as unhealthy and are being terminated. As a result, the application is running at reduced capacity. A solutions architect tries to determine the root cause by analyzing Amazon CloudWatch logs that are collected from the application, but the logs are inconclusive.
How should the solutions architect gain access to an EC2 instance to troubleshoot the issue?
Answer: D
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
For Amazon EC2 Auto Scaling, there are two primary process types: Launch and Terminate. The Launch process adds a new Amazon EC2 instance to an Auto Scaling group, increasing its capacity, and the Terminate process removes an instance from the group, decreasing its capacity. The HealthCheck process is not a primary process; it is one of several auxiliary processes alongside AddToLoadBalancer, AlarmNotification, AZRebalance, InstanceRefresh, ReplaceUnhealthy, and ScheduledActions. In this scenario, instances are marked unhealthy and then terminated, so the application runs at reduced capacity because the instances are terminated, not merely because they are marked unhealthy. Suspending the Terminate process keeps an unhealthy instance alive long enough to connect and troubleshoot.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html#choosing-suspend-r
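A sketch of the suspension step, so an unhealthy instance stays around long enough to open a Session Manager session (group and instance IDs are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

# Keep Auto Scaling from replacing or terminating the unhealthy instance
# while it is being investigated.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",  # placeholder
    ScalingProcesses=["Terminate", "ReplaceUnhealthy"],
)

# Then connect without SSH, since SSM Agent is already running:
#   aws ssm start-session --target i-0123456789abcdef0
# Resume normal operation afterward:
# autoscaling.resume_processes(AutoScalingGroupName="web-asg")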
NEW QUESTION 12
- (Exam Topic 1)
A company runs an application that gives users the ability to search for videos and related information by using keywords that are curated from content providers.
The application data is stored in an on-premises Oracle database that is 800 GB in size.
The company wants to migrate the data to an Amazon Aurora MySQL DB instance. A solutions architect
plans to use the AWS Schema Conversion Tool and AWS Database Migration Service (AWS DMS) for the migration. During the migration, the existing database
must serve ongoing requests. The migration must be completed with minimum downtime
Which solution will meet these requirements?
A. Create primary key indexes, secondary indexes, and referential integrity constraints in the target database before starting the migration process.
B. Use AWS DMS to run the conversion report for Oracle to Aurora MySQL. Remediate any issues. Then use AWS DMS to migrate the data.
C. Use the M5 or C5 DMS replication instance type for ongoing replication.
D. Turn off automatic backups and logging of the target database until the migration and cutover processes are complete.
Answer: B
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
NEW QUESTION 14
- (Exam Topic 1)
A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from client applications
running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer (NLB) for scalability.
Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized business unit VPCs only.
Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the centralized
application in the shared VPC?
Answer: B
Explanation:
Amazon Transit Gateway doesn’t support routing between Amazon VPCs with overlapping CIDRs. If you attach a new Amazon VPC that has a CIDR which
overlaps with an already attached Amazon VPC, Amazon Transit Gateway will not propagate the new Amazon VPC route into the Amazon Transit Gateway route
table.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#client-ip-pre
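The overlapping CIDRs are why this scenario typically lands on AWS PrivateLink: an endpoint service in front of the existing NLB, consumed through interface endpoints in each business unit VPC, works regardless of address overlap. A hedged boto3 sketch (ARNs and account IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# In the shared VPC: expose the NLB as a PrivateLink endpoint service.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/central/abc"  # placeholder
    ],
    AcceptanceRequired=True,  # each business unit connection must be approved
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Only authorized business unit accounts may create endpoints.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service["ServiceConfiguration"]["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::222233334444:root"],  # placeholder
)

# In each business unit VPC (run with that account's credentials):
# ec2.create_vpc_endpoint(VpcEndpointType="Interface",
#                         VpcId="vpc-bu1", ServiceName=service_name, ...)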
NEW QUESTION 19
- (Exam Topic 1)
A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company needs to store
large, important documents within the application with the following requirements:
* 1. The data must be highly durable and available.
* 2. The data must always be encrypted at rest and in transit.
* 3. The encryption key must be managed by the company and rotated periodically.
Which of the following solutions should the solutions architect recommend?
Answer: B
Explanation:
Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS for object encryption.
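A sketch of the bucket policy the explanation describes, denying non-TLS requests and uploads that are not SSE-KMS encrypted, plus rotation on a customer managed key (bucket and key IDs are placeholders):

import json
import boto3

bucket = "important-documents"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Encryption in transit: refuse any plain-HTTP request.
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Encryption at rest: refuse uploads that are not SSE-KMS.
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# "Managed by the company and rotated periodically" maps to a customer
# managed KMS key with automatic rotation enabled.
boto3.client("kms").enable_key_rotation(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")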
NEW QUESTION 22
- (Exam Topic 1)
A solutions architect is building a web application that uses an Amazon RDS for PostgreSQL DB instance. The DB instance is expected to receive many more reads than writes. The solutions architect needs to ensure that the large amount of read traffic can be accommodated and that the DB instance is highly available.
Which steps should the solutions architect take to meet these requirements? (Select THREE)
A. Create multiple read replicas and put them into an Auto Scaling group.
B. Create multiple read replicas in different Availability Zones.
C. Create an Amazon Route 53 hosted zone and a record set for each read replica with a TTL and a weighted routing policy.
D. Create an Application Load Balancer (ALB) and put the read replicas behind the ALB.
E. Configure an Amazon CloudWatch alarm to detect a failed read replica. Set the alarm to directly invoke an AWS Lambda function to delete its Route 53 record set.
F. Configure an Amazon Route 53 health check for each read replica by using its endpoint.
Answer: BCF
Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
You can use Amazon Route 53 weighted record sets to distribute requests across your read replicas. Within a Route 53 hosted zone, create individual record sets
for each DNS endpoint associated with your read replicas and give them the same weight. Then, direct requests to the endpoint of the record set. You can
incorporate Route 53 health checks to be sure that Route 53 directs traffic away from unavailable read replicas
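A sketch of one weighted record plus its health check, repeated per read replica (zone ID, domain, and endpoints are placeholders):

import boto3

route53 = boto3.client("route53")

# Health check against the replica's endpoint; Route 53 stops routing
# to it when the check fails.
check = route53.create_health_check(
    CallerReference="replica-1-check",
    HealthCheckConfig={
        "Type": "TCP",
        "FullyQualifiedDomainName": "replica-1.abc123.us-east-1.rds.amazonaws.com",
        "Port": 5432,
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Weighted CNAME: equal weights spread reads evenly across replicas.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEF",  # placeholder
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "reads.example.com",
            "Type": "CNAME",
            "SetIdentifier": "replica-1",
            "Weight": 10,
            "TTL": 60,
            "HealthCheckId": check["HealthCheck"]["Id"],
            "ResourceRecords": [
                {"Value": "replica-1.abc123.us-east-1.rds.amazonaws.com"}
            ],
        },
    }]},
)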
NEW QUESTION 27
- (Exam Topic 1)
A company is moving a business-critical multi-tier application to AWS. The architecture consists of a desktop client application and server infrastructure. The server
infrastructure resides in an on-premises data center that frequently fails to maintain the application uptime SLA of 99.95%. A solutions architect must re-architect
the application to ensure that it can meet or exceed the SLA.
The application contains a PostgreSQL database running on a single virtual machine. The business logic and presentation layers are load balanced between
multiple virtual machines. Remote users complain about slow load times while using this latency-sensitive application.
Which of the following will meet the availability requirements with little change to the application while improving user experience and minimizing costs?
A. Migrate the database to a PostgreSQL database in Amazon EC2. Host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Allocate an Amazon WorkSpaces WorkSpace for each end user to improve the user experience.
B. Migrate the database to an Amazon RDS Aurora PostgreSQL configuration. Host the application and presentation layers in an Auto Scaling configuration on Amazon EC2 instances behind an Application Load Balancer. Use Amazon AppStream 2.0 to improve the user experience.
C. Migrate the database to an Amazon RDS PostgreSQL Multi-AZ configuration. Host the application and presentation layers in automatically scaled AWS Fargate containers behind a Network Load Balancer. Use Amazon ElastiCache to improve the user experience.
D. Migrate the database to an Amazon Redshift cluster with at least two nodes. Combine and host the application and presentation layers in automatically scaled Amazon ECS containers behind an Application Load Balancer. Use Amazon CloudFront to improve the user experience.
Answer: B
Explanation:
Aurora improves availability by replicating data across multiple Availability Zones (six copies). Auto Scaling together with an ALB improves performance. AppStream 2.0 is similar to Citrix in that it delivers hosted applications to users.
NEW QUESTION 32
- (Exam Topic 1)
A company's AWS architecture currently uses access keys and secret access keys stored on each instance to access AWS services. Database credentials are hard-coded on each instance. SSH keys for command-line remote access are stored in a secured Amazon S3 bucket. The company has asked its solutions architect to improve the security posture of the architecture without adding operational complexity.
Which combination of steps should the solutions architect take to accomplish this? (Select THREE.)
Answer: ACF
Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
NEW QUESTION 33
- (Exam Topic 1)
A company wants to migrate a 30 TB Oracle data warehouse from on premises to Amazon Redshift. The company used the AWS Schema Conversion Tool (AWS SCT) to convert the schema of the existing data warehouse to an Amazon Redshift schema. The company also used a migration assessment report to identify manual tasks to complete.
The company needs to migrate the data to the new Amazon Redshift cluster during an upcoming data freeze period of 2 weeks. The only network connection between the on-premises data warehouse and AWS is a 50 Mbps internet connection.
Which migration strategy meets these requirements?
Answer: D
Explanation:
AWS Database Migration Service (AWS DMS) can use Snowball Edge and Amazon S3 to migrate large databases more quickly than by other methods
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
https://www.calctool.org/CALC/prof/computing/transfer_time
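The linked calculator makes the constraint concrete; the same arithmetic in a few lines of Python shows why a 50 Mbps link alone cannot fit inside the 2-week freeze, which is what pushes the answer toward Snowball Edge:

# 30 TB over a 50 Mbps link, ignoring protocol overhead.
data_bits = 30e12 * 8          # 30 TB expressed in bits
seconds = data_bits / 50e6     # at 50 megabits per second
print(seconds / 86400)         # ~55.6 days, i.e. roughly 4x the 2-week window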
NEW QUESTION 36
- (Exam Topic 1)
A startup company hosts a fleet of Amazon EC2 instances in private subnets using the latest Amazon Linux 2 AMI. The company's engineers rely heavily on SSH
access to the instances for troubleshooting.
The company's existing architecture includes the following:
• A VPC with private and public subnets, and a NAT gateway
• Site-to-Site VPN for connectivity with the on-premises environment
• EC2 security groups with direct SSH access from the on-premises environment
The company needs to increase security controls around SSH access and provide auditing of commands executed by the engineers.
Which strategy should a solutions architect use?
A. Install and configure EC2 Instance Connect on the fleet of EC2 instances. Remove all security group rules attached to EC2 instances that allow inbound TCP on port 22. Advise the engineers to remotely access the instances by using the EC2 Instance Connect CLI.
B. Update the EC2 security groups to only allow inbound TCP on port 22 to the IP addresses of the engineers' devices. Install the Amazon CloudWatch agent on all EC2 instances and send operating system audit logs to CloudWatch Logs.
C. Update the EC2 security groups to only allow inbound TCP on port 22 to the IP addresses of the engineers' devices. Enable AWS Config for EC2 security group resource changes. Enable AWS Firewall Manager and apply a security group policy that automatically remediates changes to rules.
D. Create an IAM role with the AmazonSSMManagedInstanceCore managed policy attached. Attach the IAM role to all the EC2 instances. Remove all security group rules attached to the EC2 instances that allow inbound TCP on port 22. Have the engineers install the AWS Systems Manager Session Manager plugin for their devices and remotely access the instances by using the start-session API call from Systems Manager.
Answer: B
NEW QUESTION 40
- (Exam Topic 1)
A finance company hosts a data lake in Amazon S3. The company receives financial data records over SFTP each night from several third parties. The company runs its own SFTP server on an Amazon EC2 instance in a public subnet of a VPC. After the files are uploaded, they are moved to the data lake by a cron job that runs on the same instance. The SFTP server is reachable on the DNS name sftp.example.com through the use of Amazon Route 53.
What should a solutions architect do to improve the reliability and scalability of the SFTP solution?
Answer: B
NEW QUESTION 45
- (Exam Topic 1)
A company is hosting a single-page web application in the AWS Cloud. The company is using Amazon CloudFront to reach its global audience. The CloudFront distribution has an Amazon S3 bucket that is configured as its origin. The static files for the web application are stored in this S3 bucket.
The company has used a simple routing policy to configure an Amazon Route 53 A record. The record points to the CloudFront distribution. The company wants to use a canary deployment release strategy for new versions of the application.
What should a solutions architect recommend to meet these requirements?
A. Create a second CloudFront distribution for the new version of the application. Update the Route 53 record to use a weighted routing policy.
B. Create a Lambda@Edge function. Configure the function to implement a weighting algorithm and rewrite the URL to direct users to a new version of the application.
C. Create a second S3 bucket and a second CloudFront origin for the new S3 bucket. Create a CloudFront origin group that contains both origins. Configure origin weighting for the origin group.
D. Create two Lambda@Edge functions. Use each function to serve one of the application versions. Set up a CloudFront weighted Lambda@Edge invocation policy.
Answer: A
NEW QUESTION 49
- (Exam Topic 2)
A life sciences company is using a combination of open source tools to manage data analysis workflows and Docker containers running on servers in its on-premises data center to process genomics data. Sequencing data is generated and stored on a local storage area network (SAN), and then the data is processed. The research and development teams are running into capacity issues and have decided to re-architect their genomics analysis platform on AWS to scale based on workload demands and reduce the turnaround time from weeks to days.
The company has a high-speed AWS Direct Connect connection. Sequencers will generate around 200 GB of data for each genome, and individual jobs can take several hours to process the data with ideal compute capacity. The end result will be stored in Amazon S3. The company is expecting 10-15 job requests each day.
Which solution meets these requirements?
A. Use regularly scheduled AWS Snowball Edge devices to transfer the sequencing data into AWS. When AWS receives the Snowball Edge device and the data is loaded into Amazon S3, use S3 events to trigger an AWS Lambda function to process the data.
B. Use AWS Data Pipeline to transfer the sequencing data to Amazon S3. Use S3 events to trigger an Amazon EC2 Auto Scaling group to launch custom-AMI EC2 instances running the Docker containers to process the data.
C. Use AWS DataSync to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow. Store the Docker images in Amazon Elastic Container Registry (Amazon ECR) and trigger AWS Batch to run the container and process the sequencing data.
D. Use an AWS Storage Gateway file gateway to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Batch job that runs on Amazon EC2 instances running the Docker containers to process the data.
Answer: C
NEW QUESTION 50
- (Exam Topic 2)
A company is in the process of implementing AWS Organizations to constrain its developers to use only Amazon EC2, Amazon S3, and Amazon DynamoDB. The developers' account resides in a dedicated organizational unit (OU). The solutions architect has implemented the following SCP on the developers' account:
When this policy is deployed, IAM users in the developers account are still able to use AWS services that are not listed in the policy. What should the solutions
architect do to eliminate the developers' ability to use services outside the scope of this policy?
A. Create an explicit deny statement for each AWS service that should be constrained
B. Remove the Full AWS Access SCP from the developer account's OU
C. Modify the Full AWS Access SCP to explicitly deny all services
D. Add an explicit deny statement using a wildcard to the end of the SCP
Answer: B
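The SCP itself is not reproduced in this dump, but it was presumably an allow list along the lines of the hypothetical reconstruction below. SCPs are filters, not grants: while the default FullAWSAccess SCP remains attached at the same level, the union of attached SCPs still allows everything, so the allow list only takes effect once FullAWSAccess is removed, which is what the keyed answer does.

import json

# Hypothetical reconstruction of the allow-list SCP described above.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowListedServicesOnly",
        "Effect": "Allow",
        "Action": ["ec2:*", "s3:*", "dynamodb:*"],
        "Resource": "*",
    }],
}
print(json.dumps(scp, indent=2))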
NEW QUESTION 53
- (Exam Topic 2)
A company hosts a blog post application on AWS using Amazon API Gateway, Amazon DynamoDB, and AWS Lambda. The application currently does not use API keys to authorize requests. The API model is as follows:
GET /posts/{postId}: to get post details
GET /users/{userId}: to get user details
GET /comments/{commentId}: to get comments details
The company has noticed users are actively discussing topics in the comments section, and the company wants to increase user engagement by making the
comments appear in real time
Which design should be used to reduce comment latency and improve user experience?
Answer: C
NEW QUESTION 55
- (Exam Topic 2)
A company's solutions architect is designing a disaster recovery (DR) solution for an application that runs on AWS. The application uses PostgreSQL 11.7 as its database. The company has an RPO of 30 seconds. The solutions architect must design a DR solution with the primary database in the us-east-1 Region and the DR database in the us-west-2 Region.
What should the solutions architect do to meet these requirements with minimum application change?
A. Migrate the database to Amazon RDS for PostgreSQL in us-east-1. Set up a read replica in us-west-2. Set the managed RPO for the RDS database to 30 seconds.
B. Migrate the database to Amazon RDS for PostgreSQL in us-east-1. Set up a standby replica in an Availability Zone in us-west-2. Set the managed RPO for the RDS database to 30 seconds.
C. Migrate the database to an Amazon Aurora PostgreSQL global database with the primary Region as us-east-1 and the secondary Region as us-west-2. Set the managed RPO for the Aurora database to 30 seconds.
D. Migrate the database to Amazon DynamoDB in us-east-1. Set up global tables with replica tables that are created in us-west-2.
Answer: A
NEW QUESTION 56
- (Exam Topic 2)
A company is using an Amazon CloudFront distribution to distribute both static and dynamic content from a web application running behind an Application Load Balancer. The web application requires user authorization and session tracking for dynamic content. The CloudFront distribution has a single cache behavior configured to forward the Authorization, Host, and User-Agent HTTP allow list headers and a session cookie to the origin. All other cache behavior settings are set to their default value.
A valid ACM certificate is applied to the CloudFront distribution with a matching CNAME in the distribution settings. The ACM certificate is also applied to the HTTPS listener for the Application Load Balancer. The CloudFront origin protocol policy is set to HTTPS only. Analysis of the cache statistics report shows that the miss rate for this distribution is very high.
What can the solutions architect do to improve the cache hit rate for this distribution without causing the SSL/TLS handshake between CloudFront and the
Application Load Balancer to fail?
A. Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP headers from the allow list headers section on both of the cache behaviors. Remove the session cookie from the allow list cookies section and the Authorization HTTP header from the allow list headers section for the cache behavior configured for static content.
B. Remove the User-Agent and Authorization HTTP headers from the allow list headers section of the cache behavior. Then update the cache behavior to use signed cookies for authorization.
C. Remove the Host HTTP header from the allow list headers section and remove the session cookie from the allow list cookies section for the default cache behavior. Enable automatic object compression and use Lambda@Edge viewer request events for user authorization.
D. Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header from the allow list headers section on both of the cache behaviors. Remove the session cookie from the allow list cookies section and the Authorization HTTP header from the allow list headers section for the cache behavior configured for static content.
Answer: D
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/understanding-the-cache-key.html
Removing the Host header would cause the SSL/TLS flow between CloudFront and the ALB to fail, because they use the same certificate.
NEW QUESTION 60
- (Exam Topic 2)
A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous applications.
The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises applications into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner.
Which combination of steps should a solutions architect take to meet these requirements? (Select THREE.)
A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs.
B. Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs
C. Group servers into applications for migration by using AWS Systems Manager Application Manager.
D. Group servers into applications for migration by using AWS Migration Hub.
E. Generate recommended instance types and associated costs by using AWS Migration Hub.
F. Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost optimization.
Answer: BDF
NEW QUESTION 61
- (Exam Topic 2)
A company has more than 10,000 sensors that send data to an on-premises Apache Kafka server by using the Message Queuing Telemetry Transport (MQTT) protocol. The on-premises Kafka server transforms the data and then stores the results as objects in an Amazon S3 bucket.
Recently, the Kafka server crashed. The company lost sensor data while the server was being restored. A solutions architect must create a new design on AWS that is highly available and scalable to prevent a similar occurrence.
Which solution will meet these requirements?
A. Launch two Amazon EC2 instances to host the Kafka server in an active/standby configuration across two Availability Zones. Create a domain name in Amazon Route 53. Create a Route 53 failover policy. Route the sensors to send the data to the domain name.
B. Migrate the on-premises Kafka server to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create a Network Load Balancer (NLB) that points to the Amazon MSK broker. Enable NLB health checks. Route the sensors to send the data to the NLB.
C. Deploy AWS IoT Core, and connect it to an Amazon Kinesis Data Firehose delivery stream. Use an AWS Lambda function to handle data transformation. Route the sensors to send the data to AWS IoT Core.
D. Deploy AWS IoT Core, and launch an Amazon EC2 instance to host the Kafka server. Configure AWS IoT Core to send the data to the EC2 instance. Route the sensors to send the data to AWS IoT Core.
Answer: A
NEW QUESTION 62
- (Exam Topic 2)
A company has a new security policy. The policy requires the company to log any event that retrieves data from Amazon S3 buckets. The company must save these audit logs in a dedicated S3 bucket. The company created the audit logs S3 bucket in an AWS account that is designated for centralized logging. The S3 bucket has a bucket policy that allows write-only cross-account access. A solutions architect must ensure that all S3 object-level access is being logged for current S3 buckets and future S3 buckets.
Which solution will meet these requirements?
Answer: D
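Object-level S3 access is captured by CloudTrail data events; an organization trail with a data event selector for all S3 objects covers current and future buckets in every member account. A sketch (names are placeholders):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Organization trail delivering to the centralized logging bucket.
cloudtrail.create_trail(
    Name="org-s3-object-audit",
    S3BucketName="central-audit-logs",  # placeholder central bucket
    IsOrganizationTrail=True,
    IsMultiRegionTrail=True,
)

# Read-type data events for every current and future S3 object; the bare
# "arn:aws:s3" value selects all buckets.
cloudtrail.put_event_selectors(
    TrailName="org-s3-object-audit",
    EventSelectors=[{
        "ReadWriteType": "ReadOnly",  # the policy asks for data retrieval events
        "IncludeManagementEvents": False,
        "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}],
    }],
)
cloudtrail.start_logging(Name="org-s3-object-audit")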
NEW QUESTION 65
- (Exam Topic 2)
A company has an organization in AWS Organizations that has a large number of AWS accounts. One of the AWS accounts is designated as a transit account and has a transit gateway that is shared with all of the other AWS accounts. AWS Site-to-Site VPN connections are configured between all of the company's global offices and the transit account. The company has AWS Config enabled on all of its accounts.
The company's networking team needs to centrally manage a list of internal IP address ranges that belong to the global offices. Developers will reference this list to gain access to applications securely.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Create a JSON file that is hosted in Amazon S3 and that lists all of the internal IP address ranges. Configure an Amazon Simple Notification Service (Amazon SNS) topic in each of the accounts that can be invoked when the JSON file is updated. Subscribe an AWS Lambda function to the SNS topic to update all relevant security group rules with the updated IP address ranges.
B. Create a new AWS Config managed rule that contains all of the internal IP address ranges. Use the rule to check the security groups in each of the accounts to ensure compliance with the list of IP address ranges. Configure the rule to automatically remediate any noncompliant security group that is detected.
C. In the transit account, create a VPC prefix list with all of the internal IP address ranges. Use AWS Resource Access Manager to share the prefix list with all of the other accounts. Use the shared prefix list to configure security group rules in the other accounts.
D. In the transit account, create a security group with all of the internal IP address ranges. Configure the security groups in the other accounts to reference the transit account's security group by using a nested security group reference of "<transit-account-id>/sg-1a2b3c4d".
Answer: C
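A sketch of the prefix list and the AWS RAM share from option C (IDs and CIDRs are placeholders):

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Customer-managed prefix list holding the office IP ranges.
plist = ec2.create_managed_prefix_list(
    PrefixListName="global-office-ranges",
    AddressFamily="IPv4",
    MaxEntries=50,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},    # placeholders
        {"Cidr": "198.51.100.0/24", "Description": "New York office"},
    ],
)

# Share it with the whole organization through AWS RAM; security group
# rules in every account can then reference the prefix list ID directly.
ram.create_resource_share(
    name="office-prefix-list",
    resourceArns=[plist["PrefixList"]["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-abc123"],  # placeholder
)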
NEW QUESTION 68
- (Exam Topic 2)
A solutions architect uses AWS Organizations to manage several AWS accounts for a company. The full Organizations feature set is activated for the organization. All production AWS accounts exist under an OU that is named "production". Systems operators have full administrative privileges within these accounts by using IAM roles.
The company wants to ensure that security groups in all production accounts do not allow inbound traffic for TCP port 22. All noncompliant security groups must be remediated immediately, and no new rules that allow port 22 can be created.
Which solution will meet these requirements?
A. Write an SCP that denies the CreateSecurityGroup action with a condition of an ec2 ingress rule with value 22. Apply the SCP to the "production" OU.
B. Configure an AWS CloudTrail trail for all accounts. Send CloudTrail logs to an Amazon S3 bucket in the Organizations management account. Configure an AWS Lambda function on the management account with permissions to assume a role in all production accounts to describe and modify security groups. Configure Amazon S3 to invoke the Lambda function on every PutObject event on the S3 bucket. Configure the Lambda function to analyze each CloudTrail event for noncompliant security group actions and to automatically remediate any issues.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) event bus in the Organizations management account. Create an AWS CloudFormation template to deploy configurations that send CreateSecurityGroup events to the event bus from all production accounts. Configure an AWS Lambda function in the management account with permissions to assume a role in all production accounts to describe and modify security groups. Configure the event bus to invoke the Lambda function. Configure the Lambda function to analyze each event for noncompliant security group actions and to automatically remediate any issues.
D. Create an AWS CloudFormation template to turn on AWS Config. Activate the INCOMING_SSH_DISABLED AWS Config managed rule. Deploy an AWS Lambda function that will run based on AWS Config findings and will remediate noncompliant resources. Deploy the CloudFormation template by using a StackSet that is assigned to the "production" OU. Apply an SCP to the OU to deny modification of the resources that the CloudFormation template provisions.
Answer: D
NEW QUESTION 71
- (Exam Topic 2)
A large company recently experienced an unexpected increase in Amazon RDS and Amazon DynamoDB costs. The company needs to increase visibility into details of AWS Billing and Cost Management. There are various accounts associated with AWS Organizations, including many development and production accounts. There is no consistent tagging strategy across the organization, but there are guidelines in place that require all infrastructure to be deployed using AWS CloudFormation with consistent tagging. Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances.
Which strategy should the solutions architect provide to meet these requirements?
A. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources.
B. Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role.
C. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict resource creation that does not include the cost center and project ID on the resource.
D. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict privileges to provision resources that do not include the cost center and project ID on the resource.
Answer: B
NEW QUESTION 76
- (Exam Topic 2)
A company recently started hosting new application workloads in the AWS Cloud. The company is using Amazon EC2 instances. Amazon Elastic File System
(Amazon EFS) file systems, and Amazon RDS DB instances.
To meet regulatory and business requirements, the company must make the following changes for data backups:
• Backups must be retained based on custom daily, weekly, and monthly requirements.
• Backups must be replicated to at least one other AWS Region immediately after capture.
• The backup solution must provide a single source of backup status across the AWS environment.
• The backup solution must send immediate notifications upon failure of any resource backup.
Which combination of steps will meet these requirements with the LEAST amount of operational overhead? (Select THREE.)
A. Create an AWS Backup plan with a backup rule for each of the retention requirements.
B. Configure an AWS Backup plan to copy backups to another Region.
C. Create an AWS Lambda function to replicate backups to another Region and send notification if a failure occurs.
D. Add an Amazon Simple Notification Service (Amazon SNS) topic to the backup plan to send a notification for finished jobs that have any status except BACKUP_JOB_COMPLETED.
E. Create an Amazon Data Lifecycle Manager (Amazon DLM) snapshot lifecycle policy for each of the retention requirements.
F. Set up RDS snapshots on each database.
Answer: BDE
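A sketch of a backup rule with cross-Region copy plus the vault notification from the keyed options (ARNs and names are placeholders; a subscriber would filter out jobs whose status is COMPLETED):

import boto3

backup = boto3.client("backup")

# One rule per retention requirement; this one is the daily rule and
# copies each recovery point to a vault in a second Region.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "regulatory-backups",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"  # placeholder
            }],
        }],
    }
)

# Publish job-completion events to SNS so failures can be alerted on.
backup.put_backup_vault_notifications(
    BackupVaultName="primary-vault",
    SNSTopicArn="arn:aws:sns:us-east-1:111122223333:backup-alerts",  # placeholder
    BackupVaultEvents=["BACKUP_JOB_COMPLETED", "COPY_JOB_FAILED"],
)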
NEW QUESTION 81
- (Exam Topic 2)
A company is creating a sequel for a popular online game. A large number of users from all over the world will play the game within the first week after launch. Currently, the game consists of the following components deployed in a single AWS Region:
• Amazon S3 bucket that stores game assets
• Amazon DynamoDB table that stores player scores
A solutions architect needs to design a multi-Region solution that will reduce latency, improve reliability, and require the least effort to implement.
What should the solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Cross-Region Replication. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
B. Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Same-Region Replication. Create a new DynamoDB table in a new Region. Configure asynchronous replication between the DynamoDB tables by using AWS Database Migration Service (AWS DMS) with change data capture (CDC).
C. Create another S3 bucket in a new Region, and configure S3 Cross-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets in each Region. Configure DynamoDB global tables by enabling Amazon DynamoDB Streams, and add a replica table in a new Region.
D. Create another S3 bucket in the same Region, and configure S3 Same-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
Answer: B
NEW QUESTION 86
- (Exam Topic 2)
A company is configuring connectivity to a multi-account AWS environment to support application workloads that serve users in a single geographic region. The workloads depend on a highly available, on-premises legacy system deployed across two locations. It is critical for the AWS workloads to maintain connectivity to the legacy system, and a minimum of 5 Gbps of bandwidth is required. All application workloads within AWS must have connectivity with one another.
Which solution will meet these requirements?
A. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create private virtual interfaces on each connection for each AWS account VPC. Associate the private virtual interface with a virtual private gateway attached to each VPC.
B. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create and attach a virtual private gateway for each AWS account VPC. Create a DX gateway in a central network account and associate it with the virtual private gateways. Create a public virtual interface on each DX connection and associate the interface with the DX gateway.
C. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create a transit gateway and a DX gateway in a central network account. Create a transit virtual interface for each DX interface and associate them with the DX gateway. Create a gateway association between the DX gateway and the transit gateway.
D. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create and attach a virtual private gateway for each AWS account VPC. Create a transit gateway in a central network account and associate it with the virtual private gateways. Create a transit virtual interface on each DX connection and attach the interface to the transit gateway.
Answer: B
NEW QUESTION 90
- (Exam Topic 2)
A company's CISO has asked a solutions architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its application can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly roll back a change in case of errors.
The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project.
What CI/CD configuration meets all of the requirements?
A. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code, and, if there are any
issues, push another code update.
B. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed code, and, if there are
any issues, trigger a manual rollback using CodeDeploy.
C. Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the newly deployed code,
and, if there are any issues, push another code update.
D. Configure the CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code and,
E. if there are any issues, push another code update.
Answer: B
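For reference, a blue/green deployment group like the one described above can be created with boto3. The following is a minimal sketch only; the application name, service role ARN, target group, and Auto Scaling group names are hypothetical placeholders, not values from the question.
    import boto3

    codedeploy = boto3.client('codedeploy')

    codedeploy.create_deployment_group(
        applicationName='web-app',                 # hypothetical application
        deploymentGroupName='web-app-bluegreen',
        serviceRoleArn='arn:aws:iam::123456789012:role/CodeDeployServiceRole',
        deploymentStyle={
            'deploymentType': 'BLUE_GREEN',
            'deploymentOption': 'WITH_TRAFFIC_CONTROL',  # shift traffic via the ALB
        },
        blueGreenDeploymentConfiguration={
            'greenFleetProvisioningOption': {'action': 'COPY_AUTO_SCALING_GROUP'},
            'deploymentReadyOption': {'actionOnTimeout': 'CONTINUE_DEPLOYMENT'},
            # Keep the old (blue) fleet around briefly so a rollback is instant.
            'terminateBlueInstancesOnDeploymentSuccess': {
                'action': 'TERMINATE',
                'terminationWaitTimeInMinutes': 60,
            },
        },
        loadBalancerInfo={'targetGroupInfoList': [{'name': 'web-app-tg'}]},
        autoScalingGroups=['web-app-asg'],
    )
Because the blue fleet stays registered until the wait time expires, rolling back is a traffic re-route rather than a redeployment, which is what makes blue/green preferable to in-place here.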
NEW QUESTION 92
- (Exam Topic 2)
A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company's applications
and databases are running in Account B.
A solutions architect will deploy a two-tier application in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS
endpoint was created in a private hosted zone for Amazon Route 53.
During deployment, the application failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solutions
architect confirmed that the record set was created correctly in Route 53.
Which combination of steps should the solutions architect take to resolve this issue? (Select TWO.)
A. Deploy the database on a separate EC2 instance in the new VPC. Create a record set for the instance's private IP in the private hosted zone.
B. Use SSH to connect to the application tier EC2 instance. Add an RDS endpoint IP address to the /etc/resolv.conf file.
C. Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
D. Create a private hosted zone for the example.com domain in Account B. Configure Route 53 replication between AWS accounts.
E. Associate a new VPC in Account B with a hosted zone in Account A.
F. Delete the association authorization in Account A.
Answer: CE
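The authorize-then-associate flow in the answer maps directly to two Route 53 API calls. A minimal boto3 sketch follows; the profile names, hosted zone ID, and VPC ID are hypothetical placeholders for credentials and resources in each account.
    import boto3

    # Sessions assumed to carry credentials for each account (hypothetical profiles).
    route53_account_a = boto3.Session(profile_name='account-a').client('route53')
    route53_account_b = boto3.Session(profile_name='account-b').client('route53')

    zone_id = 'Z0123456789EXAMPLE'  # private hosted zone in Account A
    vpc = {'VPCRegion': 'us-east-1', 'VPCId': 'vpc-0abc123example'}  # new VPC in Account B

    # Step 1 (Account A): authorize the cross-account association.
    route53_account_a.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)

    # Step 2 (Account B): associate the VPC with the private hosted zone.
    route53_account_b.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)

    # Cleanup (Account A): the authorization is no longer needed once associated.
    route53_account_a.delete_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)
After the association, instances in the new VPC resolve db.example.com through the private hosted zone in Account A.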
NEW QUESTION 94
- (Exam Topic 2)
A company's solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1
Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region.
Which solution will meet these requirements with the LEAST operational overhead?
Answer: D
NEW QUESTION 99
- (Exam Topic 2)
A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company's Node.js API servers on Amazon EC2 instances running
behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General Purpose SSD volume.
The number of sensors the company has deployed in the field has increased over time and is expected to grow significantly. The API servers are consistently
overloaded, and RDS metrics show high write latency.
Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this platform cost-
efficient? (Select TWO.)
A. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS
B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas
C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data
D. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load
E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance
Answer: CE
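The ingest path in the answer (Kinesis Data Streams feeding Lambda, which writes to DynamoDB) can be illustrated with a short Lambda handler. This is a sketch under assumptions: the table name and the payload shape (a JSON object whose keys match the table's key schema) are hypothetical.
    import base64
    import json
    import boto3

    # Hypothetical table; assumes each sensor reading is a JSON object whose
    # attributes include the table's partition/sort keys.
    table = boto3.resource('dynamodb').Table('SensorReadings')

    def handler(event, context):
        # Lambda receives Kinesis records in batches; the data field is base64-encoded.
        with table.batch_writer() as batch:
            for record in event['Records']:
                payload = json.loads(base64.b64decode(record['kinesis']['data']))
                batch.put_item(Item=payload)
This decouples ingestion from storage: Kinesis absorbs bursts from new sensors, and DynamoDB scales writes without the single-volume write-latency ceiling of the RDS instance.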
A company requires all Amazon EC2 instances to be provisioned with custom, hardened AMIs. The company wants a solution that provides each AWS account
access to the AMIs.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create the AMIs with EC2 Image Builder. Create an AWS CodePipeline pipeline to share the AMIs across all AWS accounts.
B. Deploy Jenkins on an EC2 instance. Create jobs to create and share the AMIs across all AWS accounts.
C. Create and share the AMIs with EC2 Image Builder. Use AWS Service Catalog to configure a product that provides access to the AMIs across all AWS
accounts.
D. Create the AMIs with EC2 Image Builder. Create an AWS Lambda function to share the AMIs across all AWS accounts.
Answer: C
A. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC,
B. create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2
instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB.
C. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC,
D. create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC.
E. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind
the ALB. Set up the same configuration in the us-west-1 VPC.
F. Create an Amazon Route 53 hosted zone. Create separate records for each ALB. Enable health checks to ensure high availability between Regions.
G. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC,
H. create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2
instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1
VPC. Create an Amazon Route 53 hosted zone.
I. Create separate records for each ALB. Enable health checks and configure a failover routing policy for each record.
J. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC,
K. create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances
across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB. Create an Amazon Route 53 hosted zone. Create a record for the ALB.
Answer: C
A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs once every day. Configure the rule to invoke one AWS Lambda function that starts
or stops instances based on the tag, day, and time.
B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every business day in the evening.
C. Configure the rule to invoke an AWS Lambda function that stops instances based on the tag. Create a second EventBridge (CloudWatch Events) rule that runs
every business day in the morning. Configure the second rule to invoke another Lambda function that starts instances based on the tag.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every business day in the evening. Configure the rule to invoke an AWS Lambda
function that terminates instances based on the tag. Create a second EventBridge (CloudWatch Events) rule that runs every business day in the morning. Configure
the second rule to invoke another Lambda function that restores the instances from their last backup based on the tag.
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every hour. Configure the rule to invoke one AWS Lambda function that terminates
or restores instances from their last backup based on the tag,
F. day, and time.
Answer: C
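A minimal sketch of the evening "stop by tag" Lambda handler follows; the tag key and value are hypothetical, and the morning rule's function would be identical except that it filters on the 'stopped' state and calls start_instances.
    import boto3

    ec2 = boto3.client('ec2')

    def handler(event, context):
        # Find running instances carrying the (hypothetical) scheduling tag.
        instance_ids = []
        paginator = ec2.get_paginator('describe_instances')
        for page in paginator.paginate(Filters=[
            {'Name': 'tag:Schedule', 'Values': ['business-hours']},
            {'Name': 'instance-state-name', 'Values': ['running']},
        ]):
            for reservation in page['Reservations']:
                instance_ids += [i['InstanceId'] for i in reservation['Instances']]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
Stopping (rather than terminating) preserves the EBS volumes, which is why this approach needs no backup/restore step.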
A. Implement retry logic with exponential backoff and irregular variation (jitter) in the client application.
B. Ensure that the errors are caught and handled with descriptive error messages.
C. Implement API throttling through a usage plan at the API Gateway level.
D. Ensure that the client application handles code 429 replies without error.
E. Turn on API caching to enhance responsiveness for the production stage.
F. Run 10-minute load tests. Verify that the cache capacity is appropriate for the workload.
G. Implement reserved concurrency at the Lambda function level to provide the resources that are needed during sudden increases in traffic.
Answer: A
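The "exponential backoff with irregular variation" pattern in the answer is easy to show in code. This is an illustrative sketch: ThrottlingError stands in for whatever exception the client raises on an HTTP 429, and the delay limits are arbitrary.
    import random
    import time

    class ThrottlingError(Exception):
        """Stand-in for the exception a client raises on an HTTP 429 reply."""

    def call_with_backoff(api_call, max_attempts=5, base_delay=0.5, max_delay=20.0):
        """Retry api_call on throttling, using exponential backoff with full jitter."""
        for attempt in range(max_attempts):
            try:
                return api_call()
            except ThrottlingError:
                if attempt == max_attempts - 1:
                    raise
                # Full jitter: sleep a random time up to the exponential cap,
                # so retrying clients do not re-synchronize their requests.
                time.sleep(random.uniform(0, min(max_delay, base_delay * 2 ** attempt)))
The jitter is the important part: without it, all throttled clients retry at the same instants and the burst simply repeats.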
A. Create an AWS Site-to-Site VPN connection between the VPC and the API Gateway. Use API Gateway to generate a unique API key for each microservice.
B. Configure the API methods to require the key.
C. Create an interface VPC endpoint for API Gateway, and set an endpoint policy to only allow access to the specific API. Add a resource policy to API Gateway to
only allow access from the VPC endpoint. Change the API Gateway endpoint type to private.
D. Modify the API Gateway to use IAM authentication. Update the IAM policy for the IAM role that is assigned to the EC2 instances to allow access to the API
Gateway. Move the API Gateway into a new VPC. Deploy a transit gateway and connect the VPCs.
E. Create an accelerator in AWS Global Accelerator and connect the accelerator to the API Gateway. Update the route table for all VPC subnets with a route to the
created Global Accelerator endpoint IP address.
F. Add an API key for each service to use for authentication.
Answer: B
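The resource policy half of the answer typically looks like the following deny/allow pair. This is a sketch; the VPC endpoint ID is a hypothetical placeholder, and real policies usually scope Resource to specific API ARNs.
    import json

    VPC_ENDPOINT_ID = 'vpce-0abc123example'  # hypothetical interface endpoint

    # Deny any caller that does not arrive through the VPC endpoint, then allow invoke.
    resource_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*",
                "Condition": {"StringNotEquals": {"aws:SourceVpce": VPC_ENDPOINT_ID}},
            },
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*",
            },
        ],
    }
    print(json.dumps(resource_policy, indent=2))
Combined with the private endpoint type, this keeps all API traffic on the AWS network and rejects anything that bypasses the endpoint.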
A. Create a new Network Load Balancer (NLB) in the same subnets as the Fargate task deployments. Create a security group that includes only the client IP
addresses that need access to the API.
B. Attach the new security group to the Fargate tasks.
C. Provide the security team with the NLB's IP addresses for the allow list.
D. Create two new /27 subnets.
E. Create a new Application Load Balancer (ALB) that extends across the new subnets.
F. Create a security group that includes only the client IP addresses that need access to the API.
G. Attach the security group to the ALB.
H. Provide the security team with the new subnet IP ranges for the allow list.
I. Create two new /27 subnets.
J. Create a new Network Load Balancer (NLB) that extends across the new subnets.
K. Create a new Application Load Balancer (ALB) within the new subnets.
L. Create a security group that includes only the client IP addresses that need access to the API.
M. Attach the security group to the ALB.
N. Add the ALB's IP addresses as targets behind the NLB.
O. Provide the security team with the NLB's IP addresses for the allow list.
P. Create a new Application Load Balancer (ALB) in the same subnets as the Fargate task deployments. Create a security group that includes only the client IP
addresses that need access to the API.
Q. Attach the security group to the ALB.
R. Provide the security team with the ALB's IP addresses for the allow list.
Answer: A
A. The customer should provide the partner company with their AWS account access keys to log in and perform the required tasks.
B. The customer should create an IAM user and assign the required permissions to the IAM user. The customer should then provide the credentials to the partner
company to log in and perform the required tasks.
C. The customer should create an IAM role and assign the required permissions to the IAM role.
D. The partner company should then use the IAM role's Amazon Resource Name (ARN) when requesting access to perform the required tasks.
E. The customer should create an IAM role and assign the required permissions to the IAM role.
F. The partner company should then use the IAM role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting
access to perform the required tasks.
Answer: D
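Both halves of the external-ID pattern can be sketched briefly. The account IDs, role names, and external ID below are hypothetical placeholders: the customer puts the sts:ExternalId condition in the role's trust policy, and the partner supplies the same value when assuming the role.
    import boto3

    # Trust policy the customer attaches to the role (hypothetical values).
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999999999999:root"},  # partner account
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "partner-external-id-42"}},
        }],
    }

    # Partner side: assume the role, supplying the agreed external ID.
    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn='arn:aws:iam::111111111111:role/PartnerAccessRole',
        RoleSessionName='partner-task',
        ExternalId='partner-external-id-42',
    )['Credentials']
The external ID is what defends against the confused-deputy problem: even if another customer of the partner learns the role ARN, the assume-role call fails without the matching ID.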
A. The IAM user's permissions policy has allowed the use of SAML federation for that user.
B. The IAM roles created for the federated users' or federated groups' trust policy have set the SAML provider as the principal.
C. Test users are not in the AWSFederatedUsers group in the company's IdP.
D. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
E. The on-premises IdP's DNS hostname is reachable from the AWS environment VPCs.
F. The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
Answer: BCF
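Option D's AssumeRoleWithSAML call is worth seeing once. A minimal sketch, with hypothetical ARNs and a placeholder assertion; in a real federation flow the base64-encoded SAML response comes from the IdP after the user authenticates.
    import boto3

    sts = boto3.client('sts')

    # Placeholder; in practice this is the base64-encoded SAML response from the IdP.
    saml_assertion = '<base64-encoded SAML response>'

    response = sts.assume_role_with_saml(
        RoleArn='arn:aws:iam::123456789012:role/FederatedAnalyst',      # hypothetical
        PrincipalArn='arn:aws:iam::123456789012:saml-provider/CorpIdP',  # hypothetical
        SAMLAssertion=saml_assertion,
    )
    credentials = response['Credentials']  # temporary keys for the federated session
Note that the call is made with the assertion itself, not with IAM user credentials, which is why answer A is not part of the checklist.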
A. Update the AWS CloudFormation template to include the AWS Budgets Budget resource with the NotificationsWithSubscribers property.
B. Implement Amazon CloudWatch dashboards for Amazon EMR usage.
C. Create an EMR bootstrap action that runs at startup that calls the Cost Explorer API to set the budget on the cluster with the GetCostForecast and
NotificationsWithSubscribers actions.
D. Create an AWS Service Catalog portfolio for each team.
E. Add each team's Amazon EMR cluster as an AWS CloudFormation template to their Service Catalog portfolio as a product.
F. Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold.
Answer: BE
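The NotificationsWithSubscribers property named in option A has a direct boto3 equivalent, shown below as a sketch with hypothetical account, budget, and email values.
    import boto3

    budgets = boto3.client('budgets')

    budgets.create_budget(
        AccountId='123456789012',                      # hypothetical account
        Budget={
            'BudgetName': 'emr-team-monthly',
            'BudgetLimit': {'Amount': '1000', 'Unit': 'USD'},
            'TimeUnit': 'MONTHLY',
            'BudgetType': 'COST',
        },
        NotificationsWithSubscribers=[{
            'Notification': {
                'NotificationType': 'ACTUAL',
                'ComparisonOperator': 'GREATER_THAN',
                'Threshold': 80.0,                     # alert at 80% of the limit
                'ThresholdType': 'PERCENTAGE',
            },
            'Subscribers': [{'SubscriptionType': 'EMAIL', 'Address': 'team@example.com'}],
        }],
    )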
A. Modify the EMR cluster by turning on automatic scaling of the core nodes and task nodes with a custom policy that is based on cluster utilization. Purchase
Reserved Instance capacity to cover the master node.
B. Modify the EMR cluster to use an instance fleet of Dedicated On-Demand Instances for the master node and core nodes, and to use Spot Instances for the task
nodes.
C. Define target capacity for each node type to cover the load.
D. Purchase Reserved Instances for the master node and core nodes. Terminate all existing task nodes in the EMR cluster.
E. Modify the EMR cluster to use capacity-optimized Spot Instances and a diversified task fleet.
F. Define target capacity for each node type with a mix of On-Demand Instances and Spot Instances.
Answer: B
A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.
B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster's CPU metrics in Amazon
CloudWatch reach 80%.
C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon
CloudWatch reach 80%.
D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.
Answer: D
A. Refactor the web application to use the Amazon Kinesis Producer Library (KPL) when posting bids to Kinesis Data Streams. Refactor the bid processor to flag
each record in Kinesis Data Streams as unread, processing, or processed. At the start of each bid processing run, scan Kinesis Data Streams for
unprocessed records.
B. Refactor the web application to post each incoming bid to an Amazon SNS topic in place of Kinesis Data Streams. Configure the SNS topic to trigger an AWS
Lambda function that
C. processes each bid as soon as a user submits it.
D. Refactor the web application to post each incoming bid to an Amazon SQS FIFO queue in place of Kinesis Data Streams. Refactor the bid processor to
continuously consume the SQS queue. Place the bid processing EC2 instance in an Auto Scaling group with a minimum and a maximum size of 1.
E. Switch the EC2 instance type from t2.large to a larger general compute instance type. Put the bid processor EC2 instances in an Auto Scaling group that scales
out the number of EC2 instances running the bid processor based on the IncomingRecords metric in Kinesis Data Streams.
Answer: C
Explanation:
https://aws.amazon.com/sqs/faqs/#:~:text=A%20single%20Amazon%20SQS%20message,20%2C000%20for%2
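A minimal sketch of the FIFO-queue pattern from the answer follows; the queue name, message shape, and group ID are hypothetical. Grouping messages by auction keeps each auction's bids strictly ordered while still allowing parallelism across auctions.
    import json
    import boto3

    sqs = boto3.client('sqs')

    # FIFO queues require the .fifo suffix; content-based deduplication drops repeat bids.
    queue_url = sqs.create_queue(
        QueueName='bids.fifo',
        Attributes={'FifoQueue': 'true', 'ContentBasedDeduplication': 'true'},
    )['QueueUrl']

    # Web tier: group by auction so each auction's bids stay in submission order.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({'auction': 'a-123', 'user': 'u-9', 'amount': 250}),
        MessageGroupId='a-123',
    )

    # Bid processor: long-poll, process, then delete.
    for message in sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20).get('Messages', []):
        bid = json.loads(message['Body'])
        # ... process the bid ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])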
A. Turn on the cross-origin resource sharing (CORS) feature for the S3 bucket in Account A.
B. In Account B,
C. set the S3 bucket policy to the following.
D. In Account A, set the S3 bucket policy to the following: [bucket policy image]
E. In Account B, set the permissions of User_DataProcessor to the following: [IAM policy image]
F. In Account B, set the permissions of User_DataProcessor to the following: [IAM policy image]
Answer: AD
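Option A's CORS configuration can be set with one API call. A sketch, with a hypothetical bucket name and origin:
    import boto3

    s3 = boto3.client('s3')

    # Hypothetical bucket and origin; allow the web app to read and upload objects.
    s3.put_bucket_cors(
        Bucket='shared-data-account-a',
        CORSConfiguration={
            'CORSRules': [{
                'AllowedOrigins': ['https://app.example.com'],
                'AllowedMethods': ['GET', 'PUT'],
                'AllowedHeaders': ['*'],
                'MaxAgeSeconds': 3000,   # let browsers cache the preflight response
            }]
        },
    )
CORS handles the browser-side restriction; the bucket policy in the other correct option handles the cross-account authorization.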
A. Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS Secrets Manager to
rotate the credentials.
B. Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate new credentials and
store them in AWS KMS.
C. Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to disable the credentials
and notify the user.
D. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found, disable them in
AWS IAM and notify the user.
Answer: A
A. Set up a transit gateway in each Region. Establish a redundant AWS Site-to-Site VPN connection between the on-premises firewalls and the transit gateway in
the Region that is closest to the on-premises network. Peer all the transit gateways with each other. Connect all the VPCs to the transit gateway in their Region.
B. Create an AWS CloudFormation template for a redundant AWS Site-to-Site VPN tunnel to the on-premises network. Deploy the CloudFormation template for
each VPC. Set up VPC peering between all the VPCs for VPC-to-VPC communication.
C. Set up a transit gateway in each Region. Establish a redundant AWS Site-to-Site VPN connection between the on-premises firewalls and each transit gateway.
Route traffic between the different Regions through the company's on-premises firewalls. Connect all the VPCs to the transit gateway in their Region.
D. Create an AWS CloudFormation template for a redundant AWS Site-to-Site VPN tunnel to the on-premises network. Deploy the CloudFormation template for
each VPC. Route traffic between the different Regions through the company's on-premises firewalls.
Answer: A
A. Verify that Systems Manager Agent is installed on the instance and is running
B. Verify that the instance is assigned an appropriate IAM role for Systems Manager
C. Verify the existence of a VPC endpoint on the VPC
D. Verify that the AWS Application Discovery Agent is configured
E. Verify the correct configuration of service-linked roles for Systems Manager
Answer: ABD
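The first two checks in the answer can be verified quickly from the CLI or SDK: if the agent is running and the instance profile grants Systems Manager access, the instance shows up as a managed instance. A sketch with a hypothetical instance ID:
    import boto3

    ssm = boto3.client('ssm')

    # A healthy managed instance appears here with PingStatus 'Online'.
    info = ssm.describe_instance_information(
        Filters=[{'Key': 'InstanceIds', 'Values': ['i-0abc123example']}]
    )
    for instance in info['InstanceInformationList']:
        print(instance['InstanceId'], instance['PingStatus'], instance['PlatformName'])
An empty InstanceInformationList usually points to a missing/stopped agent, a missing IAM role, or no network path to the Systems Manager endpoints.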
A. Use AWS Application Discovery Service to evaluate all running EC2 instances. Use the AWS CLI to modify each instance, and use EC2 user data to install the
AWS Systems Manager Agent during boot. Schedule patching to run as a Systems Manager Maintenance Windows task.
B. Migrate all relational databases to Amazon RDS and enable AWS KMS encryption.
C. Create an AWS CloudFormation template for the EC2 instances. Use EC2 user data in the CloudFormation template to install the AWS Systems Manager
Agent, and enable AWS KMS encryption on all Amazon EBS volumes.
D. Have CloudFormation replace all running instances.
E. Use Systems Manager Patch Manager to establish a patch baseline and deploy a Systems Manager Maintenance Windows task to run AWS-RunPatchBaseline
using the patch baseline.
F. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool. Use the Systems Manager Run Command to
run a list of commands to upgrade software on each instance using operating system-specific tools.
G. Enable AWS KMS encryption on all Amazon EBS volumes.
H. Install the AWS Systems Manager Agent on all existing instances using the company's current orchestration tool.
I. Migrate all relational databases to Amazon RDS and enable AWS KMS encryption. Use Systems Manager Patch Manager to establish a patch baseline and
deploy a Systems Manager Maintenance Windows task to run AWS-RunPatchBaseline using the patch baseline.
Answer: D
A. Ensure that the container has the environment variable with name "DB_PASSWORD" specified with a "ValueFrom" and the ARN of the secret.
B. Ensure that the container has the environment variable with name "DB_PASSWORD" specified with a "ValueFrom" and the secret name of the secret.
C. Ensure that the Fargate service security group allows inbound network traffic from the Aurora MySQL database on the MySQL TCP port 3306.
D. Ensure that the Aurora MySQL database security group allows inbound network traffic from the Fargate service on the MySQL TCP port 3306.
E. Ensure that the container has the environment variable with name "DB_HOST" specified with the hostname of a DB instance endpoint.
F. Ensure that the container has the environment variable with name "DB_HOST" specified with the hostname of the DB cluster endpoint.
Answer: ADE
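The "ValueFrom" wiring from option A looks like the following fragment of a Fargate task definition. This is a sketch only: the image, secret ARN, role ARN, and endpoint hostname are hypothetical placeholders.
    import boto3

    # Fragment of a container definition: DB_PASSWORD is injected from Secrets
    # Manager at startup; DB_HOST is a plain environment variable.
    container_definition = {
        'name': 'app',
        'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest',
        'secrets': [{
            'name': 'DB_PASSWORD',
            'valueFrom': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:db-pass-AbCdEf',
        }],
        'environment': [{
            'name': 'DB_HOST',
            'value': 'mydb.abc123example.us-east-1.rds.amazonaws.com',  # placeholder endpoint
        }],
    }

    boto3.client('ecs').register_task_definition(
        family='app',
        requiresCompatibilities=['FARGATE'],
        networkMode='awsvpc',
        cpu='256',
        memory='512',
        # The execution role must be allowed to read the secret.
        executionRoleArn='arn:aws:iam::123456789012:role/ecsTaskExecutionRole',
        containerDefinitions=[container_definition],
    )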
A. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch
alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an
Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Add an email subscription to the SNS topic that sends messages to the application
owner. Upon notification, instruct a systems operator to sign in to the AWS Management Console and initiate failover operations for the application.
B. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task to check whether the
application is available. Upon failure, the cron task notifies a systems operator and attempts to restart the application services.
C. Create a cron task that runs every 5 minutes by using one of the application's EC2 instances in the primary Region. Configure the cron task to check whether the
application is available. Upon failure, the cron task modifies the DR environment by promoting the read replica and by adding EC2 instances to the Auto Scaling
group.
D. Publish an application availability metric to Amazon CloudWatch in the DR Region from the application environment in the primary Region. Create a CloudWatch
alarm in the DR Region that is invoked when the application availability metric stops being delivered. Configure the CloudWatch alarm to send a notification to an
Amazon Simple Notification Service (Amazon SNS) topic in the DR Region. Use an AWS Lambda function that is invoked by Amazon SNS in the DR Region to
promote the read replica and to add EC2 instances to the Auto Scaling group.
Answer: D
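The failover Lambda from the answer reduces to two calls. A minimal sketch, with hypothetical replica and Auto Scaling group identifiers:
    import boto3

    rds = boto3.client('rds')
    autoscaling = boto3.client('autoscaling')

    def handler(event, context):
        # Promote the cross-Region read replica to a standalone primary.
        rds.promote_read_replica(DBInstanceIdentifier='app-db-dr-replica')
        # Scale the DR web tier from its pilot-light size to full capacity.
        autoscaling.set_desired_capacity(
            AutoScalingGroupName='app-dr-asg',
            DesiredCapacity=4,
            HonorCooldown=False,
        )
Triggering this from the alarm's SNS topic removes the human sign-in step, which is what distinguishes option D from option A.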
A. Deploy a second API Gateway regional API endpoint in us-east-1. Create Lambda integration with the functions in us-east-1.
B. Enable DynamoDB Streams on the table in eu-west-1. Replicate all changes to a DynamoDB table in us-east-1.
C. Modify the DynamoDB table to be a global table in eu-west-1 and in us-east-1.
D. Change the API Gateway API endpoint in eu-west-1 to an edge-optimized endpoint.
E. Create Lambda integration with the functions in both Regions.
F. Create a DynamoDB read replica in us-east-1.
Answer: AC
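Converting the table to a global table (option C) is a single update when the table uses the current global tables version, which handles replication itself without DMS or hand-rolled stream consumers. The table name below is hypothetical.
    import boto3

    # Run against the existing table's home Region.
    dynamodb = boto3.client('dynamodb', region_name='eu-west-1')

    # Add a replica in us-east-1; DynamoDB manages the replication.
    dynamodb.update_table(
        TableName='orders',  # hypothetical table
        ReplicaUpdates=[{'Create': {'RegionName': 'us-east-1'}}],
    )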
A. Back up all the data to a large Amazon EBS volume attached to the backup media server in the production region.
B. Run automated scripts to snapshot these volumes nightly,
C. and copy these snapshots to the disaster recovery region.
D. Back up all the data to Amazon S3 in the disaster recovery region. Use a lifecycle policy to move this data to Amazon Glacier in the production region
immediately. Once the data is replicated, remove the data from the S3 bucket in the disaster recovery region.
E. Back up all the data to Amazon Glacier in the production region.
F. Set up cross-region replication of this data to Amazon Glacier in the disaster recovery region.
G. Set up a lifecycle policy to delete any data older than 60 days.
H. Back up all the data to Amazon S3 in the production region.
I. Set up cross-region replication of this S3 bucket to another region and set up a lifecycle policy in the second region to immediately move this data to Amazon
Glacier.
Answer: D
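The "immediately move to Glacier in the second region" step is a lifecycle rule with a zero-day transition. A sketch, applied to a hypothetical replica bucket:
    import boto3

    s3 = boto3.client('s3')

    # Applied to the replica bucket in the DR Region: transition replicated
    # backups to Glacier right away (Days=0) to minimize storage cost.
    s3.put_bucket_lifecycle_configuration(
        Bucket='backups-dr-replica',  # hypothetical bucket
        LifecycleConfiguration={
            'Rules': [{
                'ID': 'archive-immediately',
                'Status': 'Enabled',
                'Filter': {},  # apply to every object in the bucket
                'Transitions': [{'Days': 0, 'StorageClass': 'GLACIER'}],
            }]
        },
    )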
A. Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list.
B. Create a task named "Email" that forwards the input arguments to the SNS topic.
C. Add a Catch field to all Task,
D. Map,
E. and Parallel states that have a statement of "ErrorEquals": [ "States.ALL" ] and "Next": "Email".
F. Add a new email address to Amazon Simple Email Service (Amazon SES). Verify the email address.
G. Create a task named "Email" that forwards the input arguments to the SES email address.
H. Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": [ "States.Runtime" ] and "Next": "Email".
Answer: BCD
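The Catch-to-Email wiring looks like the following Amazon States Language fragment, sketched here with hypothetical Lambda and SNS ARNs. "States.ALL" matches every error, and the Email task uses the Step Functions SNS publish integration.
    import json

    # ASL fragment: any error in ProcessOrder falls through to an 'Email' task
    # that publishes the caught error to an SNS topic (ARNs are placeholders).
    state_machine = {
        "StartAt": "ProcessOrder",
        "States": {
            "ProcessOrder": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
                "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "Email"}],
                "End": True,
            },
            "Email": {
                "Type": "Task",
                "Resource": "arn:aws:states:::sns:publish",
                "Parameters": {
                    "TopicArn": "arn:aws:sns:us-east-1:123456789012:team-alerts",
                    "Message.$": "$",   # forward the caught error as the message body
                },
                "End": True,
            },
        },
    }
    print(json.dumps(state_machine, indent=2))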
• Support the capability to merge tested code into production code. How should the solutions architect achieve these requirements?
A. Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the artifacts within an S3
bucket in a separate testing account.
B. Trigger a separate pipeline from CodeCommit feature branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the artifacts within an
S3 bucket in a separate testing account.
C. Trigger a separate pipeline from CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target for staging the artifacts
within an S3 bucket in a separate testing account.
D. Create a separate CodeCommit repository for feature development and use it to trigger the pipeline. Use AWS Lambda for running unit tests. Use AWS
CodeBuild to stage the artifacts within different S3 buckets in the same production account.
Answer: A
A. Store media in an Amazon S3 Standard bucket. Create an S3 Lifecycle configuration that transitions objects that are older than 30 days to the S3 Standard-
Infrequent Access (S3 Standard-IA) storage class.
B. Store media on an Amazon Elastic File System (Amazon EFS) volume. Attach the EFS volume to all Fargate instances.
C. Store application state on an Amazon Elastic File System (Amazon EFS) volume. Attach the EFS volume to all Fargate instances.
D. Store application state on an Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all Fargate instances.
E. Store media in an Amazon S3 Standard bucket. Create an S3 Lifecycle configuration that transitions objects that are older than 30 days to the S3 Glacier
storage class.
Answer: AC
A. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create an S3
Lifecycle policy to transition objects that are older than 30 days to S3 Intelligent-Tiering.
B. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part and the simulation part. Create an S3 Lifecycle policy to transition objects
that are older than 30 days to S3 Glacier.
C. Purchase Compute Savings Plans to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create an S3 Lifecycle
policy to transition objects that are older than 30 days to S3 Glacier.
D. Purchase Compute Savings Plans to cover the usage for the configuration part. Purchase EC2 Reserved Instances for the simulation part. Create an S3
Lifecycle policy to transition objects that are older than 30 days to S3 Glacier Deep Archive.
Answer: C