
Amazon-Web-Services
Exam Questions AIF-C01
AWS Certified AI Practitioner


NEW QUESTION 1
An AI practitioner has built a deep learning model to classify the types of materials in images. The AI practitioner now wants to measure the model performance.
Which metric will help the AI practitioner evaluate the performance of the model?

A. Confusion matrix
B. Correlation matrix
C. R2 score
D. Mean squared error (MSE)

Answer: A

Explanation:
A confusion matrix is the correct metric for evaluating the performance of a classification model, such as the deep learning model built to classify types of materials in images. It tabulates true positives, false positives, true negatives, and false negatives for each class, from which accuracy, precision, and recall per material type can be derived. A correlation matrix describes relationships between input features rather than model performance, while the R2 score and mean squared error (MSE) are regression metrics and do not apply to a classification task.
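
As an illustration of this metric (not part of the original question), a confusion matrix for a small, made-up set of material labels can be computed with scikit-learn; the labels and predictions below are invented:

# Hypothetical example: evaluating a materials classifier with a confusion matrix.
from sklearn.metrics import confusion_matrix, classification_report

y_true = ["metal", "wood", "plastic", "metal", "wood", "plastic", "metal"]   # actual classes
y_pred = ["metal", "wood", "metal", "metal", "plastic", "plastic", "metal"]  # model output

labels = ["metal", "plastic", "wood"]
print(confusion_matrix(y_true, y_pred, labels=labels))  # rows = actual, columns = predicted

# Precision, recall, and F1 per material type are derived from the same counts.
print(classification_report(y_true, y_pred, labels=labels))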

NEW QUESTION 2
A company uses a foundation model (FM) from Amazon Bedrock for an AI search tool. The company wants to fine-tune the model to be more accurate by using
the company's data.
Which strategy will successfully fine-tune the model?

A. Provide labeled data with the prompt field and the completion field.
B. Prepare the training dataset by creating a .txt file that contains multiple lines in .csv format.
C. Purchase Provisioned Throughput for Amazon Bedrock.
D. Train the model on journals and textbooks.

Answer: A

Explanation:
Providing labeled data with both a prompt field and a completion field is the correct strategy for fine-tuning a foundation model (FM) on Amazon Bedrock. Fine-tuning requires a labeled dataset of prompt-completion pairs so the model can learn the company's desired input-to-output mapping. A .txt file of .csv-style lines is not the expected training format, Provisioned Throughput concerns inference capacity rather than training data, and training on generic journals and textbooks does not incorporate the company's own labeled data.
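
To illustrate the expected input (the file name and records below are invented), Bedrock fine-tuning jobs typically consume labeled examples as JSON Lines, one object per line with a prompt field and a completion field; a minimal Python sketch for preparing such a file:

# Sketch: writing prompt/completion pairs as a JSONL training file.
import json

examples = [
    {"prompt": "What is the warranty period for product X100?",
     "completion": "The X100 ships with a two-year limited warranty."},
    {"prompt": "How do I reset an X100 to factory settings?",
     "completion": "Hold the power button for ten seconds until the LED blinks."},
]

with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")  # one labeled record per line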

NEW QUESTION 3
An AI practitioner wants to use a foundation model (FM) to design a search application. The search application must handle queries that have text and images.
Which type of FM should the AI practitioner use to power the search application?

A. Multi-modal embedding model


B. Text embedding model
C. Multi-modal generation model
D. Image generation model

Answer: A

Explanation:
A multi-modal embedding model is the correct type of foundation model (FM) for powering a search application that handles queries containing both text and images. Such a model maps text and images into the same vector space, so a query in either modality can be matched against indexed content by vector similarity. A text embedding model handles text only, while multi-modal generation and image generation models produce new content rather than the embeddings a search index needs.
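
As a hedged sketch only (the model ID, request fields, and image file are assumptions based on Amazon Titan Multimodal Embeddings and are not part of the original question), a multi-modal embedding can be requested through the Bedrock runtime like this:

# Sketch: embedding a text + image query into one vector for similarity search.
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("sample.jpg", "rb") as f:                     # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = json.dumps({"inputText": "red ceramic tile", "inputImage": image_b64})
response = bedrock.invoke_model(modelId="amazon.titan-embed-image-v1", body=body)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # one vector that represents both the text and the image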

NEW QUESTION 4
How can companies use large language models (LLMs) securely on Amazon Bedrock?

A. Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access.
B. Enable AWS Audit Manager for automatic model evaluation jobs.
C. Enable Amazon Bedrock automatic model evaluation jobs.
D. Use Amazon CloudWatch Logs to make models explainable and to monitor for bias.

Answer: A

Explanation:
To securely use large language models (LLMs) on Amazon Bedrock, companies should design clear and specific prompts to avoid unintended outputs and ensure
proper configuration of AWS Identity and Access Management (IAM) roles and policies with the principle of least privilege. This approach limits access to sensitive
resources and minimizes the potential impact of security incidents.
? Option A (Correct): "Design clear and specific prompts. Configure AWS Identity
and Access Management (IAM) roles and policies by using least privilege access": This is the correct answer as it directly addresses both security practices in
prompt design and access management.
? Option B: "Enable AWS Audit Manager for automatic model evaluation jobs" is
incorrect because Audit Manager is for compliance and auditing, not directly related to secure LLM usage.
? Option C: "Enable Amazon Bedrock automatic model evaluation jobs" is incorrect
because Bedrock does not provide automatic model evaluation jobs specifically for security purposes.
? Option D: "Use Amazon CloudWatch Logs to make models explainable and to
monitor for bias" is incorrect because CloudWatch Logs are used for monitoring and not directly for making models explainable or secure.


AWS AI Practitioner References:


? Secure AI Practices on AWS: AWS recommends configuring IAM roles and using least privilege access to ensure secure usage of AI models.
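
A minimal sketch of what least privilege access can look like in practice (the region and model ARN below are placeholders, not from the original question), expressed as an IAM identity policy built in Python:

# Sketch: the application's role may invoke only one specific Bedrock foundation model.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        }
    ],
}
print(json.dumps(policy, indent=2))  # attach to the application's role, nothing broader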

NEW QUESTION 5
A company wants to classify human genes into 20 categories based on gene characteristics. The company needs an ML algorithm to document how the inner
mechanism of the model affects the output.
Which ML algorithm meets these requirements?

A. Decision trees
B. Linear regression
C. Logistic regression
D. Neural networks

Answer: A

Explanation:
Decision trees are an interpretable machine learning algorithm that clearly documents the decision-making process by showing how each input feature affects the
output. This transparency is particularly useful when explaining how the model arrives at a certain decision, making it suitable for classifying genes into categories.
? Option A (Correct): "Decision trees": This is the correct answer because decision
trees provide a clear and interpretable representation of how input features influence the model's output, making it ideal for understanding the inner mechanisms
affecting predictions.
? Option B: "Linear regression" is incorrect because it is used for regression tasks,
not classification.
? Option C: "Logistic regression" is incorrect as it does not provide the same level of interpretability in documenting decision-making processes.
? Option D: "Neural networks" is incorrect because they are often considered "black boxes" and do not easily explain how they arrive at their outputs.
AWS AI Practitioner References:
? Interpretable Machine Learning Models on AWS: AWS supports using interpretable models, such as decision trees, for tasks that require clear documentation of
how input data affects output decisions.
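
A small illustration of that interpretability (synthetic data, simplified to 3 classes rather than the 20 in the question) using scikit-learn, which can print the learned rules as plain if/else splits:

# Sketch: a decision tree's decision path can be exported as readable rules.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=200, n_features=5, n_informative=3,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every split threshold and leaf class is visible, documenting the inner mechanism.
print(export_text(tree, feature_names=[f"gene_feature_{i}" for i in range(5)]))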

NEW QUESTION 6
What does an F1 score measure in the context of foundation model (FM) performance?

A. Model precision and recall


B. Model speed in generating responses
C. Financial cost of operating the model
D. Energy efficiency of the model's computations

Answer: A

Explanation:
The F1 score is a metric used to evaluate the performance of a classification model by considering both precision and recall. Precision measures the accuracy of
positive predictions (i.e., the proportion of true positive predictions among all positive predictions made by the model), while recall measures the model's ability to
identify all relevant positive instances (i.e., the proportion of true positive predictions among all actual positive instances). The F1 score is the harmonic mean of
precision and recall, providing a single metric that balances both concerns. This is particularly useful when dealing with imbalanced datasets or when the cost of
false positives and false negatives is significant. Options B, C, and D pertain to other aspects of model performance but are not related to the F1 score.
Reference: AWS Certified AI Practitioner Exam Guide
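
A short worked example (the counts are invented) showing how the F1 score combines precision and recall as their harmonic mean:

# F1 from made-up confusion-matrix counts.
tp, fp, fn = 40, 10, 20
precision = tp / (tp + fp)                            # 0.8
recall = tp / (tp + fn)                               # about 0.667
f1 = 2 * precision * recall / (precision + recall)    # about 0.727
print(round(precision, 3), round(recall, 3), round(f1, 3))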

NEW QUESTION 7
An education provider is building a question and answer application that uses a generative AI model to explain complex concepts. The education provider wants to
automatically change the style of the model response depending on who is asking the question. The education provider will give the model the age range of the
user who has asked the question.
Which solution meets these requirements with the LEAST implementation effort?

A. Fine-tune the model by using additional training data that is representative of the various age ranges that the application will support.
B. Add a role description to the prompt context that instructs the model of the age range that the response should target.
C. Use chain-of-thought reasoning to deduce the correct style and complexity for a response suitable for that user.
D. Summarize the response text depending on the age of the user so that younger users receive shorter responses.

Answer: B

Explanation:
Adding a role description to the prompt context is a straightforward way to instruct the generative AI model to adjust its response style based on the user's age
range. This method requires minimal implementation effort as it does not involve additional training or complex logic.
? Option B (Correct): "Add a role description to the prompt context that instructs the model of the age range that the response should target": This is the correct
answer because it involves the least implementation effort while effectively guiding the
model to tailor responses according to the age range.
? Option A: "Fine-tune the model by using additional training data" is incorrect because it requires significant effort in gathering data and retraining the model.
? Option C: "Use chain-of-thought reasoning" is incorrect as it involves complex reasoning that may not directly address the need to adjust response style based
on age.
? Option D: "Summarize the response text depending on the age of the user" is incorrect because it involves additional processing steps after generating the initial
response, increasing complexity.
AWS AI Practitioner References:
? Prompt Engineering Techniques on AWS: AWS recommends using prompt context effectively to guide generative models in providing tailored responses based
on specific user attributes.
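
A minimal sketch of this approach (the template wording is illustrative, not from the original question): the application only has to interpolate the known age range into the prompt before calling the model.

# Sketch: adding a role description with the user's age range to the prompt.
def build_prompt(question: str, age_range: str) -> str:
    return (
        f"You are a tutor answering questions for a student aged {age_range}. "
        "Use vocabulary, tone, and examples appropriate for that age group.\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("Why is the sky blue?", "8-10"))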

NEW QUESTION 8
A company is using an Amazon Bedrock base model to summarize documents for an internal use case. The company trained a custom model to improve the
summarization quality.
Which action must the company take to use the custom model through Amazon Bedrock?


A. Purchase Provisioned Throughput for the custom model.


B. Deploy the custom model in an Amazon SageMaker endpoint for real-time inference.
C. Register the model with the Amazon SageMaker Model Registry.
D. Grant access to the custom model in Amazon Bedrock.

Answer: B

Explanation:
To use a custom model that has been trained to improve summarization quality, the company must deploy the model on an Amazon SageMaker endpoint. This
allows the model to be used for real-time inference through Amazon Bedrock or other AWS services. By deploying the model in SageMaker, the custom model can
be accessed programmatically via API calls, enabling integration with Amazon Bedrock.
? Option B (Correct): "Deploy the custom model in an Amazon SageMaker endpoint
for real-time inference": This is the correct answer because deploying the model on SageMaker enables it to serve real-time predictions and be integrated with
Amazon Bedrock.
? Option A: "Purchase Provisioned Throughput for the custom model" is incorrect
because provisioned throughput is related to database or storage services, not model deployment.
? Option C: "Register the model with the Amazon SageMaker Model Registry" is
incorrect because while the model registry helps with model management, it does not make the model accessible for real-time inference.
? Option D: "Grant access to the custom model in Amazon Bedrock" is incorrect
because Bedrock does not directly manage custom model access; it relies on deployed endpoints like those in SageMaker.
AWS AI Practitioner References:
? Amazon SageMaker Endpoints: AWS recommends deploying models to SageMaker endpoints to use them for real-time inference in various applications.

NEW QUESTION 9
Which AWS feature records details about ML instance data for governance and reporting?

A. Amazon SageMaker Model Cards


B. Amazon SageMaker Debugger
C. Amazon SageMaker Model Monitor
D. Amazon SageMaker JumpStart

Answer: A

Explanation:
Amazon SageMaker Model Cards provide a centralized and standardized repository for documenting machine learning models. They capture key details such as
the model's intended use, training and evaluation datasets, performance metrics, ethical considerations, and other relevant information. This documentation
facilitates governance and reporting by ensuring that all stakeholders have access to consistent and comprehensive information about each model. While Amazon
SageMaker Debugger is used for real-time debugging and monitoring during training, and Amazon SageMaker Model Monitor tracks deployed models for data and
prediction quality, neither offers the comprehensive documentation capabilities of Model Cards. Amazon SageMaker JumpStart provides pre-built models and
solutions but does not focus on governance documentation.
Reference: Amazon SageMaker Model Cards

NEW QUESTION 10
A company wants to develop a large language model (LLM) application by using Amazon Bedrock and customer data that is uploaded to Amazon S3. The
company's security policy states that each team can access data for only the team's own customers.
Which solution will meet these requirements?

A. Create an Amazon Bedrock custom service role for each team that has access to only the team's customer data.
B. Create a custom service role that has Amazon S3 access. Ask teams to specify the customer name on each Amazon Bedrock request.
C. Redact personal data in Amazon S3. Update the S3 bucket policy to allow team access to customer data.
D. Create one Amazon Bedrock role that has full Amazon S3 access. Create IAM roles for each team that have access to only each team's customer folders.

Answer: A

Explanation:
To comply with the company's security policy, which restricts each team to data for only its own customers, creating an Amazon Bedrock custom service role for each team is the correct solution. A per-team service role can be scoped to only that team's customer data in Amazon S3, so the restriction is enforced by IAM rather than by asking teams to pass the right customer name, redacting data, or sharing one role with full S3 access.
Thus, A is the correct answer to meet the company's security requirements.
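
As a sketch of how such per-team scoping might look (the bucket name and prefix layout are assumptions for illustration), each team's role can be limited to its own S3 prefix:

# Sketch: generate a least-privilege policy scoped to one team's customer prefix.
def team_policy(team: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::customer-data-bucket/{team}/*",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::customer-data-bucket",
                "Condition": {"StringLike": {"s3:prefix": [f"{team}/*"]}},
            },
        ],
    }

print(team_policy("team-alpha"))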

NEW QUESTION 10
A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.
Which solution will meet these requirements?

A. Deploy optimized small language models (SLMs) on edge devices.


B. Deploy optimized large language models (LLMs) on edge devices.
C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.

Answer: A

Explanation:
To achieve the lowest latency possible for inference on edge devices, deploying optimized small language models (SLMs) is the most effective solution. SLMs
require fewer
resources and have faster inference times, making them ideal for deployment on edge devices where processing power and memory are limited.
? Option A (Correct): "Deploy optimized small language models (SLMs) on edge

Passing Certification Exams Made Easy visit - https://www.surepassexam.com


Recommend!! Get the Full AIF-C01 dumps in VCE and PDF From SurePassExam
https://www.surepassexam.com/AIF-C01-exam-dumps.html (97 New Questions)

devices": This is the correct answer because SLMs provide fast inference with low latency, which is crucial for edge deployments.
? Option B: "Deploy optimized large language models (LLMs) on edge devices" is
incorrect because LLMs are resource-intensive and may not perform well on edge devices due to their size and computational demands.
? Option C: "Incorporate a centralized small language model (SLM) API for
asynchronous communication with edge devices" is incorrect because it introduces network latency due to the need for communication with a centralized server.
? Option D: "Incorporate a centralized large language model (LLM) API for
asynchronous communication with edge devices" is incorrect for the same reason, with even greater latency due to the larger model size.
AWS AI Practitioner References:
? Optimizing AI Models for Edge Devices on AWS: AWS recommends using small, optimized models for edge deployments to ensure minimal latency and efficient
performance.

NEW QUESTION 12
A company wants to use a pre-trained generative AI model to generate content for its marketing campaigns. The company needs to ensure that the generated
content aligns with the company's brand voice and messaging requirements.
Which solution meets these requirements?

A. Optimize the model's architecture and hyperparameters to improve the model's overall performance.
B. Increase the model's complexity by adding more layers to the model's architecture.
C. Create effective prompts that provide clear instructions and context to guide the model's generation.
D. Select a large, diverse dataset to pre-train a new generative model.

Answer: C

Explanation:
Creating effective prompts is the best solution to ensure that content generated by a pre-trained generative AI model aligns with the company's brand voice and messaging requirements. Clear instructions, tone guidance, and examples in the prompt steer the model's output without any retraining. Changing the architecture or hyperparameters, adding layers, or pre-training a new model from scratch would require far more effort and would not directly encode the brand voice.

NEW QUESTION 15
A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and
use an AI model responsibly to minimize bias that could negatively affect some customers.
Which actions should the company take to meet these requirements? (Select TWO.)

A. Detect imbalances or disparities in the data.


B. Ensure that the model runs frequently.
C. Evaluate the model's behavior so that the company can provide transparency to stakeholders.
D. Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate.
E. Ensure that the model's inference time is within the accepted limits.

Answer: AC

Explanation:
To build and use an AI model responsibly, especially in sensitive applications like loan approvals, it's crucial to address potential biases and ensure transparency:
? Detect imbalances or disparities in the data (Option A): Analyzing the training data
for imbalances or disparities is essential. Imbalanced data can lead to models that are biased towards the majority class, potentially disadvantaging certain groups.
By identifying and mitigating these imbalances, the company can reduce the risk of biased predictions.
? Evaluate the model's behavior to provide transparency to stakeholders (Option C):
Regularly assessing the model's outputs and decision-making processes allows the company to understand how decisions are made. This evaluation fosters
transparency, enabling the company to explain model behavior to stakeholders
and ensure that the model operates as intended without unintended biases. Options B, D, and E, while relevant to model performance and evaluation, do not
directly address the responsible use of AI concerning bias and transparency.
Reference: AWS Certified AI Practitioner Exam Guide
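
A quick illustration of the first action, detecting imbalances in the data (column names and values are invented), using pandas:

# Sketch: check group representation and outcome rates before training.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   0],
})

print(df["group"].value_counts(normalize=True))   # how well each group is represented
print(df.groupby("group")["approved"].mean())     # outcome disparity across groups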

NEW QUESTION 17
A company wants to deploy a conversational chatbot to answer customer questions. The chatbot is based on a fine-tuned Amazon SageMaker JumpStart model.
The application must comply with multiple regulatory frameworks.
Which capabilities can the company show compliance for? (Select TWO.)

A. Auto scaling inference endpoints


B. Threat detection
C. Data protection
D. Cost optimization
E. Loosely coupled microservices

Answer: BC

Explanation:
To comply with multiple regulatory frameworks, the company must ensure data protection and threat detection. Data protection involves safeguarding sensitive
customer information, while threat detection identifies and mitigates security threats to the application.
? Option C (Correct): "Data protection": This is correct because data protection is
critical for compliance with privacy and security regulations.
? Option B (Correct): "Threat detection": This is correct because detecting and mitigating threats is essential to maintaining the security posture required for
regulatory compliance.
? Option A: "Auto scaling inference endpoints" is incorrect because auto-scaling does not directly relate to regulatory compliance.
? Option D: "Cost optimization" is incorrect because it is focused on managing expenses, not compliance.
? Option E: "Loosely coupled microservices" is incorrect because this architectural approach does not directly address compliance requirements.
AWS AI Practitioner References:
? AWS Compliance Capabilities: AWS offers services and tools, such as data protection and threat detection, to help companies meet regulatory requirements for security and privacy.

NEW QUESTION 20
A company has installed a security camera. The company uses an ML model to evaluate the security camera footage for potential thefts. The company has
discovered that the model disproportionately flags people who are members of a specific ethnic group.
Which type of bias is affecting the model output?

A. Measurement bias
B. Sampling bias
C. Observer bias
D. Confirmation bias

Answer: B

Explanation:
Sampling bias is the type of bias affecting the model output when it disproportionately flags people who belong to a specific ethnic group. Sampling bias occurs when the training data does not represent the population evenly, for example when one group is over-represented among the labeled theft examples, so the model learns a skewed association. Measurement bias concerns how data is captured, observer bias concerns how humans label or interpret data, and confirmation bias concerns favoring information that supports existing beliefs; none of these describes an unrepresentative training sample.

NEW QUESTION 21
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company wants to classify the sentiment of text passages
as positive or negative.
Which prompt engineering strategy meets these requirements?

A. Provide examples of text passages with corresponding positive or negative labels in the prompt followed by the new text passage to be classified.
B. Provide a detailed explanation of sentiment analysis and how LLMs work in the prompt.
C. Provide the new text passage to be classified without any additional context or examples.
D. Provide the new text passage with a few examples of unrelated tasks, such as text summarization or question answering.

Answer: A

Explanation:
Providing examples of text passages with corresponding positive or negative labels in the prompt, followed by the new text passage to be classified, is the correct prompt engineering strategy; this is commonly called few-shot prompting. The labeled examples show the model the expected output format and the decision boundary, so it can classify the new passage consistently. Explaining how LLMs work, providing no context at all, or including unrelated tasks does not guide the model toward the positive or negative labels the company needs.
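
A minimal sketch of such a few-shot prompt (the passages and labels are invented examples):

# Sketch: labeled examples in the prompt steer the model's sentiment labels.
few_shot_prompt = """Classify the sentiment of each passage as Positive or Negative.

Passage: The checkout process was quick and the staff were friendly.
Sentiment: Positive

Passage: My order arrived late and the box was damaged.
Sentiment: Negative

Passage: {new_passage}
Sentiment:"""

print(few_shot_prompt.format(new_passage="The app keeps crashing when I try to pay."))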

NEW QUESTION 26
A company has a database of petabytes of unstructured data from internal sources. The company wants to transform this data into a structured format so that its
data scientists can perform machine learning (ML) tasks.
Which service will meet these requirements?

A. Amazon Lex
B. Amazon Rekognition
C. Amazon Kinesis Data Streams
D. AWS Glue

Answer: D

Explanation:
AWS Glue is the correct service for transforming petabytes of unstructured data into a structured format suitable for machine learning tasks. Glue is a serverless data integration and ETL service that can crawl, catalog, and transform data at scale for downstream analytics and ML. Amazon Lex builds conversational interfaces, Amazon Rekognition analyzes images and video, and Amazon Kinesis Data Streams ingests streaming data; none of these performs large-scale ETL.

NEW QUESTION 28
Which functionality does Amazon SageMaker Clarify provide?

A. Integrates a Retrieval Augmented Generation (RAG) workflow


B. Monitors the quality of ML models in production
C. Documents critical details about ML models
D. Identifies potential bias during data preparation

Answer: D

Explanation:
Amazon SageMaker Clarify detects potential bias during data preparation (pre-training bias metrics) and after training, and it also helps explain model predictions through feature attributions.
? Option D (Correct): "Identifies potential bias during data preparation": This is the correct answer because bias detection and explainability are the core functions of SageMaker Clarify.
? Option A: "Integrates a Retrieval Augmented Generation (RAG) workflow" is incorrect because RAG workflows are built with other services, such as Knowledge Bases for Amazon Bedrock, not Clarify.
? Option B: "Monitors the quality of ML models in production" is incorrect because that is the role of Amazon SageMaker Model Monitor.
? Option C: "Documents critical details about ML models" is incorrect because that is the purpose of Amazon SageMaker Model Cards.
AWS AI Practitioner References:
? Amazon SageMaker Clarify: AWS describes Clarify as a capability for detecting bias in datasets and models and for explaining model predictions.
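
As a hedged sketch only (role ARN, S3 paths, column names, and instance type are placeholders; the class and method names follow the sagemaker Python SDK's clarify module as commonly shown in AWS examples), a pre-training bias check might be configured like this:

# Sketch: run SageMaker Clarify pre-training bias metrics on a CSV dataset.
from sagemaker import clarify

processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",
    s3_output_path="s3://my-bucket/clarify-output/",
    label="approved",
    headers=["age", "income", "gender", "approved"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # the favorable outcome
    facet_name="gender",             # the attribute to examine for bias
)

# Bias metrics are computed on the dataset itself, before any model is trained.
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)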


NEW QUESTION 29
A company wants to build an interactive application for children that generates new stories based on classic stories. The company wants to use Amazon Bedrock
and needs to ensure that the results and topics are appropriate for children.
Which AWS service or feature will meet these requirements?

A. Amazon Rekognition
B. Amazon Bedrock playgrounds
C. Guardrails for Amazon Bedrock
D. Agents for Amazon Bedrock

Answer: C

Explanation:
Amazon Bedrock is a service that provides foundational models for building generative AI applications. When creating an application for children, it is crucial to
ensure that the generated content is appropriate for the target audience. "Guardrails" in Amazon Bedrock provide mechanisms to control the outputs and topics of
generated content to align with desired safety standards and appropriateness levels.
? Option C (Correct): "Guardrails for Amazon Bedrock": This is the correct answer
because guardrails are specifically designed to help users enforce content moderation, filtering, and safety checks on the outputs generated by models in Amazon
Bedrock. For a children's application, guardrails ensure that all content generated is suitable and appropriate for the intended audience.
? Option A: "Amazon Rekognition" is incorrect. Amazon Rekognition is an image and
video analysis service that can detect inappropriate content in images or videos, but it does not handle text or story generation.
? Option B: "Amazon Bedrock playgrounds" is incorrect because playgrounds are
environments for experimenting and testing model outputs, but they do not inherently provide safeguards to ensure content appropriateness for specific audiences,
such as children.
? Option D: "Agents for Amazon Bedrock" is incorrect. Agents in Amazon Bedrock
facilitate building AI applications with more interactive capabilities, but they do not provide specific guardrails for ensuring content appropriateness for children.
AWS AI Practitioner References:
? Guardrails in Amazon Bedrock: Designed to help implement controls that ensure generated content is safe and suitable for specific use cases or audiences, such
as children, by moderating and filtering inappropriate or undesired content.
? Building Safe AI Applications: AWS provides guidance on implementing ethical AI practices, including using guardrails to protect against generating inappropriate
or biased content.

NEW QUESTION 34
A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short
and written in a specific language.
Which solution will align the LLM response quality with the company's expectations?

A. Adjust the prompt.


B. Choose an LLM of a different size.
C. Increase the temperature.
D. Increase the Top K value.

Answer: A

Explanation:
Adjusting the prompt is the correct solution to align the LLM outputs with the company's expectations for short responses written in a specific language. The prompt can state the target language and a length limit explicitly, which controls response style with no retraining. Choosing a model of a different size does not guarantee the desired style, and increasing the temperature or the Top K value changes the randomness of token selection, not the length or language of the response.

NEW QUESTION 36
A company has terabytes of data in a database that the company can use for business analysis. The company wants to build an AI-based application that can
build a SQL query from input text that employees provide. The employees have minimal experience with technology.
Which solution meets these requirements?

A. Generative pre-trained transformers (GPT)


B. Residual neural network
C. Support vector machine
D. WaveNet

Answer: A

Explanation:
Generative pre-trained transformers (GPT) are suitable for building an AI-based application that generates SQL queries from natural language input provided by employees. GPT-style large language models excel at translating plain-language requests into structured output such as SQL, which suits employees with minimal technical experience. Residual neural networks are used mainly for image tasks, support vector machines are classical classifiers, and WaveNet is an audio generation model; none of these generates SQL from free-form text.
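
A small illustration (the schema and request are invented, and no specific model call is shown) of the kind of prompt such an application would send to a GPT-style model:

# Sketch: turn a plain-English request plus a schema description into a SQL prompt.
schema = "Table sales(order_id INT, region VARCHAR, amount DECIMAL, order_date DATE)"
question = "Total sales amount per region for 2024"

prompt = (
    "You write SQL for analysts.\n"
    f"Schema: {schema}\n"
    f"Request: {question}\n"
    "Return only the SQL query."
)
print(prompt)
# The model would be expected to return something like:
# SELECT region, SUM(amount) AS total FROM sales
# WHERE order_date BETWEEN '2024-01-01' AND '2024-12-31' GROUP BY region;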

NEW QUESTION 40
A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company
does not need to access the model predictions immediately.
Which Amazon SageMaker inference option will meet these requirements?

A. Batch transform
B. Real-time inference
C. Serverless inference
D. Asynchronous inference


Answer: A

Explanation:
Batch transform in Amazon SageMaker is designed for offline processing of large datasets. It is ideal for scenarios where immediate predictions are not required,
and the inference can be done on large datasets that are multiple gigabytes in size. This method processes data in batches, making it suitable for analyzing
archived data without the need for real- time access to predictions.
? Option A (Correct): "Batch transform": This is the correct answer because batch
transform is optimized for handling large datasets and is suitable when immediate access to predictions is not required.
? Option B: "Real-time inference" is incorrect because it is used for low-latency, real-
time prediction needs, which is not required in this case.
? Option C: "Serverless inference" is incorrect because it is designed for small-scale, intermittent inference requests, not for large batch processing.
? Option D: "Asynchronous inference" is incorrect because it is used when immediate predictions are required, but with high throughput, whereas batch transform
is more suitable for very large datasets.
AWS AI Practitioner References:
? Batch Transform on AWS SageMaker: AWS recommends using batch transform for large datasets when real-time processing is not needed, ensuring cost-
effectiveness and scalability.
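
As a hedged sketch (model name, S3 paths, and instance type are placeholders; the Transformer class is from the sagemaker Python SDK), a batch transform job over an archived dataset might look like this:

# Sketch: offline inference over a large S3 dataset with SageMaker batch transform.
from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name="my-archived-data-model",            # a model already created in SageMaker
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/batch-predictions/",
)

# Processes the whole multi-GB dataset in one offline job; no real-time endpoint needed.
transformer.transform(
    data="s3://my-bucket/archived-data/",
    content_type="text/csv",
    split_type="Line",
)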

NEW QUESTION 43
A large retailer receives thousands of customer support inquiries about products every day. The customer support inquiries need to be processed and responded
to quickly. The company wants to implement Agents for Amazon Bedrock.
What are the key benefits of using Amazon Bedrock agents that could help this retailer?

A. Generation of custom foundation models (FMs) to predict customer needs


B. Automation of repetitive tasks and orchestration of complex workflows
C. Automatically calling multiple foundation models (FMs) and consolidating the results
D. Selecting the foundation model (FM) based on predefined criteria and metrics

Answer: B

Explanation:
Amazon Bedrock Agents provide the capability to automate repetitive tasks and orchestrate complex workflows using generative AI models. This is particularly
beneficial for customer support inquiries, where quick and efficient processing is crucial.
? Option B (Correct): "Automation of repetitive tasks and orchestration of complex workflows": This is the correct answer because Bedrock Agents can automate
common customer service tasks and streamline complex processes, improving response times and efficiency.
? Option A: "Generation of custom foundation models (FMs) to predict customer needs" is incorrect as Bedrock agents do not create custom models.
? Option C: "Automatically calling multiple foundation models (FMs) and consolidating the results" is incorrect because Bedrock agents focus on task automation
rather than combining model outputs.
? Option D: "Selecting the foundation model (FM) based on predefined criteria and metrics" is incorrect as Bedrock agents are not designed for selecting models.
AWS AI Practitioner References:
? Amazon Bedrock Documentation: AWS explains that Bedrock Agents automate tasks and manage complex workflows, making them ideal for customer support
automation.

NEW QUESTION 47
A company wants to develop an educational game where users answer questions such as the following: "A jar contains six red, four green, and three yellow
marbles. What is the probability of choosing a green marble from the jar?"
Which solution meets these requirements with the LEAST operational overhead?

A. Use supervised learning to create a regression model that will predict probability.
B. Use reinforcement learning to train a model to return the probability.
C. Use code that will calculate probability by using simple rules and computations.
D. Use unsupervised learning to create a model that will estimate probability density.

Answer: C

Explanation:
The problem involves a simple probability calculation that can be handled efficiently by straightforward mathematical rules and computations. Using machine
learning techniques would introduce unnecessary complexity and operational overhead.
? Option C (Correct): "Use code that will calculate probability by using simple rules and computations": This is the correct answer because it directly solves the
problem with minimal overhead, using basic probability rules.
? Option A: "Use supervised learning to create a regression model" is incorrect as it overcomplicates the solution for a simple probability problem.
? Option B: "Use reinforcement learning to train a model" is incorrect because reinforcement learning is not needed for a simple probability calculation.
? Option D: "Use unsupervised learning to create a model" is incorrect as unsupervised learning is not applicable to this task.
AWS AI Practitioner References:
? Choosing the Right Solution for AI Tasks: AWS recommends using the simplest and most efficient approach to solve a given problem, avoiding unnecessary
machine learning techniques for straightforward tasks.
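
The calculation itself is a one-liner, which is why plain code has the least operational overhead:

# Probability of drawing a green marble from the jar in the question.
red, green, yellow = 6, 4, 3
p_green = green / (red + green + yellow)
print(p_green)  # 4/13, roughly 0.308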

NEW QUESTION 50
A student at a university is copying content from generative AI to write essays. Which challenge of responsible generative AI does this scenario represent?

A. Toxicity
B. Hallucinations
C. Plagiarism
D. Privacy

Answer: C

Explanation:
The scenario where a student copies content from generative AI to write essays represents the challenge of plagiarism in responsible AI use. Plagiarism means presenting generated or copied work as one's own original writing, which is exactly the academic-integrity problem in this scenario. Toxicity refers to harmful or offensive outputs, hallucinations are confident but factually incorrect outputs, and privacy concerns the exposure of personal or sensitive data; none of these describes submitting AI-generated text as one's own work.

NEW QUESTION 53
A pharmaceutical company wants to analyze user reviews of new medications and provide a concise overview for each medication. Which solution meets these
requirements?

A. Create a time-series forecasting model to analyze the medication reviews by using Amazon Personalize.
B. Create medication review summaries by using Amazon Bedrock large language models (LLMs).
C. Create a classification model that categorizes medications into different groups by using Amazon SageMaker.
D. Create medication review summaries by using Amazon Rekognition.

Answer: B

Explanation:
Amazon Bedrock provides large language models (LLMs) that are optimized for natural language understanding and text summarization tasks, making it the best
choice for creating concise summaries of user reviews. Time-series forecasting, classification, and image analysis (Rekognition) are not suitable for summarizing
textual data. References: AWS Bedrock Documentation.
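
A hedged sketch of such a summarization call (the model ID and reviews are placeholders; the call uses the Bedrock runtime Converse API):

# Sketch: summarize a batch of medication reviews with a Bedrock LLM.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

reviews = [
    "Relieved my headache within an hour, but I felt a bit drowsy afterwards.",
    "No side effects for me, though it took two days to notice any improvement.",
]
prompt = "Summarize these medication reviews in two sentences:\n- " + "\n- ".join(reviews)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])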

NEW QUESTION 58
A company is developing a new model to predict the prices of specific items. The model performed well on the training dataset. When the company deployed the
model to production, the model's performance decreased significantly.
What should the company do to mitigate this problem?

A. Reduce the volume of data that is used in training.


B. Add hyperparameters to the model.
C. Increase the volume of data that is used in training.
D. Increase the model training time.

Answer: C

Explanation:
When a model performs well on the training data but poorly in production, it is often due to overfitting. Overfitting occurs when a model learns patterns and noise
specific to the training data, which does not generalize well to new, unseen data in production. Increasing the volume of data used in training can help mitigate this
problem by providing a more diverse and representative dataset, which helps the model generalize better.
? Option C (Correct): "Increase the volume of data that is used in training":
Increasing the data volume can help the model learn more generalized patterns rather than specific features of the training dataset, reducing overfitting and
improving performance in production.
? Option A: "Reduce the volume of data that is used in training" is incorrect, as
reducing data volume would likely worsen the overfitting problem.
? Option B: "Add hyperparameters to the model" is incorrect because adding hyperparameters alone does not address the issue of data diversity or model
generalization.
? Option D: "Increase the model training time" is incorrect because simply increasing training time does not prevent overfitting; the model needs more diverse data.
AWS AI Practitioner References:
? Best Practices for Model Training on AWS: AWS recommends using a larger and more diverse training dataset to improve a model's generalization capability
and reduce the risk of overfitting.

NEW QUESTION 59
A company makes forecasts each quarter to decide how to optimize operations to meet expected demand. The company uses ML models to make these
forecasts.
An AI practitioner is writing a report about the trained ML models to provide transparency and explainability to company stakeholders.
What should the AI practitioner include in the report to meet the transparency and explainability requirements?

A. Code for model training


B. Partial dependence plots (PDPs)
C. Sample data for training
D. Model convergence tables

Answer: B

Explanation:
Partial dependence plots (PDPs) are visual tools used to show the relationship between a feature (or a set of features) in the data and the predicted outcome of a
machine learning model. They are highly effective for providing transparency and explainability of the model's behavior to stakeholders by illustrating how different
input variables impact the model's predictions.
? Option B (Correct): "Partial dependence plots (PDPs)": This is the correct answer because PDPs help to interpret how the model's predictions change with
varying values of input features, providing stakeholders with a clearer understanding of the model's decision-making process.
? Option A: "Code for model training" is incorrect because providing the raw code for model training may not offer transparency or explainability to non-technical
stakeholders.
? Option C: "Sample data for training" is incorrect as sample data alone does not explain how the model works or its decision-making process.
? Option D: "Model convergence tables" is incorrect. While convergence tables can show the training process, they do not provide insights into how input features
affect the model's predictions.
AWS AI Practitioner References:
? Explainability in AWS Machine Learning: AWS provides various tools for model explainability, such as Amazon SageMaker Clarify, which includes PDPs to help
explain the impact of different features on the model's predictions.
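
A small illustration of generating a PDP (synthetic data; scikit-learn's inspection module), which is the kind of plot the report could include:

# Sketch: plot how predictions change as one feature varies, holding others fixed.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=300, n_features=4, noise=5.0, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

PartialDependenceDisplay.from_estimator(model, X, features=[0])
plt.savefig("pdp_feature_0.png")   # attach this plot to the stakeholder report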

NEW QUESTION 61
A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon
S3 bucket.
The data is encrypted with Amazon S3 managed keys (SSE-S3).
The FM encounters a failure when attempting to access the S3 bucket data. Which solution will meet these requirements?


A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data withthe correct encryption key.
B. Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C. Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D. Ensure that the S3 data does not contain sensitive information.

Answer: A

Explanation:
Amazon Bedrock needs the appropriate IAM role with permission to access and decrypt data stored in Amazon S3. If the data is encrypted with Amazon S3
managed keys (SSE- S3), the role that Amazon Bedrock assumes must have the required permissions to access and decrypt the encrypted data.
? Option A (Correct): "Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key": This is the correct
solution as it ensures that the AI model can access the encrypted data securely without changing the encryption settings or compromising data security.
? Option B: "Set the access permissions for the S3 buckets to allow public access" is incorrect because it violates security best practices by exposing sensitive
data to the public.
? Option C: "Use prompt engineering techniques to tell the model to look for information in Amazon S3" is incorrect as it does not address the encryption and
permission issue.
? Option D: "Ensure that the S3 data does not contain sensitive information" is incorrect because it does not solve the access problem related to encryption.
AWS AI Practitioner References:
? Managing Access to Encrypted Data in AWS: AWS recommends using proper IAM roles and policies to control access to encrypted data stored in S3.

NEW QUESTION 64
......


Thank You for Trying Our Product

We offer two products:

1st - We have Practice Tests Software with Actual Exam Questions

2nd - Questions and Answers in PDF Format

AIF-C01 Practice Exam Features:

* AIF-C01 Questions and Answers Updated Frequently

* AIF-C01 Practice Questions Verified by Expert Senior Certified Staff

* AIF-C01 Most Realistic Questions that Guarantee you a Pass on Your First Try

* AIF-C01 Practice Test Questions in Multiple Choice Formats and Updates for 1 Year

100% Actual & Verified — Instant Download, Please Click


Order The AIF-C01 Practice Test Here
