Data Cloud Certification 2 of 3

The document outlines various questions and answers related to Salesforce Data Cloud, focusing on data source disconnection, attribute name modifications, data ingestion methods, privacy law compliance, and data processing steps. Key points include the necessity of removing dependencies before disconnecting data sources, the use of specific tools for data ingestion, and the importance of following proper sequences for data processing. Additionally, it highlights considerations for managing data streams and configurations across multiple Data Cloud orgs.

Question 1

Which two dependencies need to be removed prior to disconnecting a data source? (Choose two.)
Activation target
Correct selection
Segment
Correct selection
Data stream
Activation
Overall explanation
Dependencies in Data Cloud:

Before disconnecting a data source, all dependencies must be removed to prevent data integrity issues.

Identifying Dependencies:

Segment: Segments using data from the source must be deleted or reassigned.

Data Stream: The data stream must be disconnected, as it directly relies on the data source.

Steps to Remove Dependencies:

Remove Segments:

Navigate to the Segmentation interface in Salesforce Data Cloud.

Identify and delete segments relying on the data source.

Disconnect Data Stream:

Go to the Data Stream settings.

Locate and disconnect the data stream associated with the source.

Practical Application:

Example: When preparing to disconnect a legacy CRM system, ensure all segments and data streams using its data are properly removed or migrated.

Question 2
How can a consultant modify attribute names to match a naming
convention in Cloud File Storage targets?
Update attribute names in the data stream configuration.
Use a formula field to update the field name in an activation.
Correct answer
Set preferred attribute names when configuring activation.
Update field names in the data model object.
Overall explanation
A Cloud File Storage target is a type of data action target in Data Cloud
that allows sending data to a cloud storage service such as Amazon S3
or Google Cloud Storage. When configuring an activation to a Cloud File
Storage target, a consultant can modify the attribute names to match a
naming convention by setting preferred attribute names in Data Cloud.
Preferred attribute names are aliases that can be used to control the
field names in the target file. They can be set for each attribute in the
activation configuration, and they will override the default field names
from the data model object. The other options are incorrect because
they do not affect the field names in the target file. Using a formula field
to update the field name in an activation will not change the field name,
but only the field value. Updating attribute names in the data stream
configuration will not affect the existing data lake objects or data model
objects. Updating field names in the data model object will change the
field names for all data sources and activations that use the object,
which may not be desirable or consistent. References: Preferred
Attribute Name, Create a Data Cloud Activation Target, Cloud File
Storage Target
Question 3
Which solution provides an easy way to ingest Marketing Cloud
subscriber profile attributes into Data Cloud on a daily basis?
Marketing Cloud Connect API
Email Studio Starter Data Bundle
Automation Studio and Profile API
Correct answer
Marketing Cloud Data Extension Data Stream
Overall explanation
The solution that provides an easy way to ingest Marketing Cloud
subscriber profile attributes into Data Cloud on a daily basis is the
Marketing Cloud Data Extension Data Stream. The Marketing Cloud Data
Extension Data Stream is a feature that allows customers to stream data
from Marketing Cloud data extensions to Data Cloud data spaces.
Customers can select which data extensions they want to stream, and
Data Cloud will automatically create and update the corresponding data
model objects (DMOs) in the data space. Customers can also map the
data extension fields to the DMO attributes using a user interface or an
API. The Marketing Cloud Data Extension Data Stream can help
customers ingest subscriber profile attributes and other data from
Marketing Cloud into Data Cloud without writing any code or setting up
any complex integrations.

The other options are not solutions that provide an easy way to ingest
Marketing Cloud subscriber profile attributes into Data Cloud on a daily
basis. Automation Studio and Profile API are tools that can be used to
export data from Marketing Cloud to external systems, but they require
customers to write scripts, configure file transfers, and schedule
automations. Marketing Cloud Connect API is an API that can be used to
access data from Marketing Cloud in other Salesforce solutions, such as
Sales Cloud or Service Cloud, but it does not support streaming data to
Data Cloud. Email Studio Starter Data Bundle is a data kit that contains
sample data and segments for Email Studio, but it does not contain
subscriber profile attributes or stream data to Data Cloud.

Question 4

During a privacy law discussion with a customer, the customer indicates they need to honor requests for the right to be forgotten. The consultant determines that Consent API will solve this business need.

Which two considerations should the consultant inform the customer about? (Choose two.)
Correct selection
Data deletion requests are submitted for Individual profiles.
Data deletion requests submitted to Data Cloud are passed to
all connected Salesforce clouds.
Correct selection
Data deletion requests are reprocessed at 30, 60, and 90 days.
Data deletion requests are processed within 1 hour.
Overall explanation
A. Data deletion requests are submitted for Individual profiles.
 This is correct because the Consent API in Salesforce Data Cloud
allows for handling data deletion requests specifically for Individual
profiles, ensuring compliance with privacy regulations like the right
to be forgotten.

C. Data deletion requests are reprocessed at 30, 60, and 90 days.

This is correct because Salesforce processes data deletion requests periodically (at 30, 60, and 90 days) to ensure that all data, including any residual data, is completely removed from the system.
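To make the Consent API consideration concrete, here is a minimal Python sketch of submitting a deletion request for a single Individual profile. The endpoint path, payload shape, and tenant URL are illustrative assumptions, not the documented contract; consult the Data Cloud Consent API reference for your org before using anything like this.

```python
# Hedged sketch: submit a right-to-be-forgotten request for one Individual.
# The endpoint path and payload below are assumptions for illustration only.
import requests

TENANT_URL = "https://your-tenant.example.salesforce.com"  # placeholder
ACCESS_TOKEN = "<OAuth access token>"  # placeholder

def request_deletion(individual_id: str) -> None:
    """Submit a data deletion request for a single Individual profile."""
    response = requests.post(
        f"{TENANT_URL}/api/v1/consent/deletion",  # hypothetical path
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"ids": [individual_id], "dataModelObject": "Individual"},
        timeout=30,
    )
    response.raise_for_status()
    # Per the explanation above, deletion requests are reprocessed at
    # 30, 60, and 90 days, so removal is eventual rather than immediate.

request_deletion("IND-000001")  # placeholder Individual ID
```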
Question 5
A user is not seeing suggested values from newly-modeled data
when building a segment.

What is causing this issue?


Value suggestion will only return results for the first 50 values
of a specific attribute.
Value suggestion can only work on direct attributes and not
related attributes.
Value suggestion requires Data Aware Specialist permissions at
a minimum.
Correct answer
Value suggestion is still processing and takes up to 24 hours to
be available.
Overall explanation
The most likely cause of this issue is that value suggestion is still
processing and takes up to 24 hours to be available. Value suggestion is
a feature that enables you to see suggested values for data model
object (DMO) fields when creating segment filters. However, this feature
needs to be enabled for each DMO field, and it can take up to 24 hours
for the suggested values to appear after enabling the feature.
Therefore, if a user is not seeing suggested values from newly-modeled
data, it could be that the data has not been processed yet by the value
suggestion feature.

Question 6

Northern Trail Outfitters uploads new customer data to an Amazon S3 Bucket on a daily basis to be ingested in Data Cloud.

In what order should each process be run to ensure that freshly imported data is ready and available to use for any segment?
Identity Resolution > Refresh Data Stream > Calculated Insight
Calculated Insight > Refresh Data Stream > Identity Resolution
Refresh Data Stream > Calculated Insight > Identity Resolution
Correct answer
Refresh Data Stream > Identity Resolution > Calculated Insight
Overall explanation
To ensure that freshly imported data from an Amazon S3 Bucket is ready
and available to use for any segment, the following processes should be
run in this order:

 Refresh Data Stream: This process updates the data lake objects in Data Cloud with the latest data from the source system. It can be configured to run automatically or manually, depending on the data stream settings. Refreshing the data stream ensures that Data Cloud has the most recent and accurate data from the Amazon S3 Bucket.

 Identity Resolution: This process creates unified individual profiles by matching and consolidating source profiles from different data streams based on the identity resolution ruleset. It runs daily by default, but can be triggered manually as well. Identity resolution ensures that Data Cloud has a single view of each customer across different data sources.

 Calculated Insight: This process performs calculations on data lake objects or CRM data and returns a result as a new data object. It can be used to create metrics or measures for segmentation or analysis purposes. Calculated insights ensure that Data Cloud has the derived data that can be used for personalization or activation.

Question 7
What is the first step to set up and configure a Data Cloud
instance after it has been provisioned?
Connect to the Salesforce CRM org that Data Cloud is
provisioned in.
Complete the Salesforce Data Cloud "Get Started" process.
Correct answer
Enable the Data Cloud Admin permission set to the relevant
Salesforce CRM user.
Connect to the Marketing Cloud Account that Data Cloud is
provisioned in.
Overall explanation
The first step in setting up a Salesforce Data Cloud instance after
provisioning is to grant the Data Cloud Admin permission set to the
relevant Salesforce CRM user. This step ensures that the user has the
necessary permissions to access and configure the Data Cloud instance,
including setting up connections, data streams, and managing
configurations.
Question 8
What are two benefits of calculated insights over segmentation
criteria? (Choose two.)
Correct selection
Calculated insights results can be included as attributes in
activation.
Calculated insights refresh automatically upon reference in
segments and activation.
Correct selection
Calculated insights can query engagement data greater than 2
years.
Calculated insights are better suited for single row based
operation.
Overall explanation
1. A. Calculated insights results can be included as attributes
in activation: Calculated insights allow you to generate metrics
or derived values (e.g., customer lifetime value or average
purchase frequency) that can then be included in the activation
process as attributes, making them highly actionable.
2. C. Calculated insights can query engagement data greater
than 2 years: Unlike segmentation criteria, which may be
constrained by data limits or predefined filters, calculated insights
can perform more complex queries on historical data, including
engagement data older than 2 years, depending on data retention
policies.
Question 9

Northern Trail Outfitters uses B2C Commerce and is exploring implementing Data Cloud to get a unified view of its customers and all their order transactions.

What should the consultant keep in mind with regard to historical data when ingesting order data using the B2C Commerce Order Bundle?
The B2C Commerce Order Bundle does not ingest any historical
data and only ingests new orders from that point on.
Correct answer
The B2C Commerce Order Bundle ingests 30 days of historical
data.
The B2C Commerce Order Bundle ingests 6 months of historical
data.
The B2C Commerce Order Bundle ingests 12 months of historical
data.
Overall explanation
When you establish a connection between B2C Commerce and Data
Cloud, the B2C Commerce Connector ingests 30 days of historical data.
For example, if you establish a connection on May 1, the B2C Commerce
Connector ingests historical data from April 2 to May 1. From this point
forward, the B2C Commerce Connector continues to ingest data.
Question 10

Cloud Kicks plans to do a full deletion of one of its existing data streams and its underlying data lake object (DLO).

What should the consultant consider before deleting the data stream?

Correct answer
The underlying DLO can be used in a data transform.
The underlying DLO cannot be mapped to a data model object.
The data stream must be associated with a data kit.
The data stream can be deleted without implicitly deleting the
underlying DLO.
Overall explanation
Data Streams and DLOs: In Salesforce Data Cloud, data streams are used to ingest data, which is then stored in Data Lake Objects (DLOs).

Deletion Considerations: Before deleting a data stream, it's crucial to consider the dependencies and usage of the underlying DLO.

Data Transform Usage:

Impact of Deletion: If the underlying DLO is used in a data transform, deleting the data stream will affect any transforms relying on that DLO.

Dependency Check: Ensure that the DLO is not part of any active data transformations or processes that could be disrupted by its deletion.

References: Salesforce Data Cloud Documentation: Data Streams; Salesforce Data Cloud Documentation: Data Transforms
Question 11

Cumulus Financial needs to create a composite key on an incoming data source that combines the fields Customer Region and Customer Identifier.

Which formula function should a consultant use to create a composite key when a primary key is not available in a data stream?
CAST
Correct answer
CONCAT
COALESCE
COMBINE
Overall explanation
The CONCAT function is used to combine multiple fields into a single
composite value. In this case, to create a composite key from the
fields Customer Region and Customer Identifier, the consultant
would use CONCAT(Customer Region, Customer Identifier).
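As an illustration of the logic, the sketch below reproduces the composite-key construction in Python. The field names come from the question itself; the delimiter is an added assumption to keep distinct field combinations from colliding.

```python
# Python illustration of the CONCAT composite-key logic from the question.
def composite_key(record: dict) -> str:
    # Mirrors CONCAT(Customer Region, Customer Identifier). The "-" delimiter
    # is an added assumption: it keeps ("AB", "1") distinct from ("A", "B1").
    return f"{record['Customer Region']}-{record['Customer Identifier']}"

row = {"Customer Region": "EMEA", "Customer Identifier": "48172"}
print(composite_key(row))  # -> EMEA-48172
```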
Question 12

A consultant is reviewing a recent activation using engagement-based related attributes but is not seeing any related attributes in their payload for the majority of their segment members.

Which two areas should the consultant review to help troubleshoot this issue?

Choose 2 answers

Correct selection
The related engagement events occurred within the last 90
days.
The activations are referencing segments that segment on
profile data rather than engagement data.
Correct selection
The correct path is selected for the related attributes.
The activated profiles have a Unified Contact Point.
Overall explanation
Engagement-based related attributes are attributes that describe the
interactions of a person with an email message, such as opens, clicks,
unsubscribes, etc. These attributes are stored in the Engagement data
model object (DMO) and can be added to an activation to send more
personalized communications. However, there are some considerations
and limitations when using engagement-based related attributes, such
as:

 For engagement data, activation supports a 90-day lookback window. This means that only the attributes from the engagement events that occurred within the last 90 days are considered for activation. Any records outside of this window are not included in the activation payload. Therefore, the consultant should review the event time of the related engagement events and make sure they are within the lookback window.

 The correct path to the related attributes must be selected for the
activation. A path is a sequence of DMOs that are connected by
relationships in the data model. For example, the path from
Individual to Engagement is Individual -> Email -> Engagement.
The path determines which related attributes are available for
activation and how they are filtered. Therefore, the consultant
should review the path selection and make sure it matches the
desired related attributes and filters.
The other two options are not relevant for this issue. The activations can
reference segments that segment on profile data rather than
engagement data, as long as the activation target supports related
attributes. The activated profiles do not need to have a Unified Contact
Point, which is a unique identifier for a person across different data
sources, to activate engagement-based related attributes. References:
Add Related Attributes to an Activation, Related Attributes in Data Cloud
activation have no values, Explore the Engagement Data Model Object

Question 13

How does identity resolution select attributes for unified individuals when there is conflicting information in the data model?
Creates additional contact points
Correct answer
Leverages reconciliation rules
Creates additional rulesets
Leverages match rules
Overall explanation
When there is conflicting information in the data model, reconciliation
rules are used during the Identity Resolution process to determine
which attributes to select for the unified individual. Reconciliation rules
help prioritize and resolve conflicts by specifying which data sources or
attributes take precedence, ensuring the most accurate and consistent
information is used.
 A. Creates additional contact points: This is not relevant to
resolving conflicts in attributes.
 C. Creates additional rulesets: Rulesets define how matching
occurs but do not resolve attribute conflicts.
 D. Leverages match rules: Match rules identify and link records
across data sources but do not handle conflicts in attribute values.
Question 14

A Data Cloud consultant is in the process of setting up data streams for a new service-based data source.

When ingesting Case data, which field is recommended to be associated with the Event Time field?

Last Modified Date
Escalation Date
Resolution Date
Correct answer
Creation Date
Overall explanation
The Event Time field in Salesforce Data Cloud represents the
timestamp when an event or record occurred. For case data,
the Creation Date is the most appropriate field to associate with the
Event Time, as it reflects when the case was first created in the system.
This ensures accurate time-based segmentation, reporting, and insights.
Question 15

A customer has two Data Cloud orgs. A new configuration has been completed and tested for an Amazon S3 data stream and its mappings in one of the Data Cloud orgs.

What is recommended to package and promote this configuration to the customer's second org?
Package as an AppExchange application.
Correct answer
Create a data kit.
Use the Metadata API.
Use the Salesforce CRM connector.
Overall explanation
Data Cloud Configuration Promotion: When managing configurations
across multiple Salesforce Data Cloud orgs, it's essential to use tools
that ensure consistency and accuracy in the promotion process.

Data Kits: Salesforce Data Cloud allows users to package and promote
configurations using data kits. These kits encapsulate data stream
definitions, mappings, and other configuration elements into a portable
format.

Process:

Create a data kit in the source org that includes the Amazon S3 data
stream configuration and mappings.

Export the data kit from the source org.

Import the data kit into the target org, ensuring that all configurations
are transferred accurately.

Advantages: Using data kits simplifies the migration process, reduces the risk of configuration errors, and ensures that all settings and mappings are consistently applied in the new org.

Question 16
Northern Trail Outfitters (NTO) is configuring an identity
resolution ruleset based on Fuzzy Name and Normalized Email.
What should NTO do to ensure the best email address is
activated?

Correct answer
Use the source priority order in activations to make sure a
contact point from the desired source is delivered to the
activation target.
Set the default reconciliation rule to Last Updated.
Ensure Marketing Cloud is prioritized as the first data source in
the Source Priority reconciliation rule.
Include Contact Point Email object Is Active field as a match
rule.
Overall explanation
NTO is using Fuzzy Name and Normalized Email as match rules to link
together data from different sources into a unified individual profile.
However, there might be cases where the same email address is
available from more than one source, and NTO needs to decide which
one to use for activation. For example, if Rachel has the same email
address in Service Cloud and Marketing Cloud, but prefers to receive
communications from NTO via Marketing Cloud, NTO needs to ensure
that the email address from Marketing Cloud is activated. To do this,
NTO can use the source priority order in activations, which allows them
to rank the data sources in order of preference for activation. By placing
Marketing Cloud higher than Service Cloud in the source priority order,
NTO can make sure that the email address from Marketing Cloud is
delivered to the activation target, such as an email campaign or a
journey. This way, NTO can respect Rachel’s preference and deliver a
better customer experience. References: Configure Activations, Use
Source Priority Order in Activations
Question 17
A customer needs to integrate in real time with Salesforce CRM.

Which feature accomplishes this requirement?


Data model triggers
Data actions and Lightning web components
Correct answer
Streaming transforms
Sales and Service bundle
Overall explanation
Streaming transforms allow for real-time data processing and
integration. They are specifically designed for handling and transforming
data in real time as it streams into Salesforce CRM. This is ideal for
situations where you need real-time integration with external systems or
need to act on data as it arrives.
Question 18

Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new line of business. The new business specializes in gourmet camping food. For business reasons as well as security reasons, it's important to NTO to keep all Data Cloud data separated by brand.

Which capability best supports NTO's desire to separate its data by brand?

Data sources for each brand
Data model objects for each brand
Correct answer
Data spaces for each brand
Data streams for each brand
Overall explanation
Data spaces are logical containers that allow you to separate and
organize your data by different criteria, such as brand, region, product,
or business unit. Data spaces can help you manage data access,
security, and governance, as well as enable cross-cloud data integration
and activation. For NTO, data spaces can support their desire to
separate their data by brand, so that they can have different data
models, rules, and insights for their outdoor lifestyle clothing and
gourmet camping food businesses. Data spaces can also help NTO
comply with any data privacy and security regulations that may apply to
their different brands. The other options are incorrect because they do
not provide the same level of data separation and organization as data
spaces. Data streams are used to ingest data from different sources into
Data Cloud, but they do not separate the data by brand. Data model
objects are used to define the structure and attributes of the data, but
they do not isolate the data by brand. Data sources are used to identify
the origin and type of the data, but they do not partition the data by
brand.
Question 19

A consultant is planning the ingestion of a data stream that has profile information including a mobile phone number.

To ensure that the phone number can be used for future SMS campaigns, they need to confirm the phone number field is in the proper E164 Phone Number format. However, the phone numbers in the file appear to be in varying formats.

What is the most efficient way to guarantee that the various phone number formats are standardized?

Correct answer
Create a formula field to standardize the format
Create a calculated insight after ingestion.
Edit and update the data in the source system prior to sending
to Data Cloud.
Assign the PhoneNumber field type when creating the data
stream.
Overall explanation
The formula field is the most efficient way to standardize the phone
number format during the data ingestion process. By using a formula,
you can apply logic to transform different phone number formats into
the standard E164 format. This ensures that all phone numbers are
consistent before they are used for campaigns, especially for SMS.

For example, the formula can parse the phone number and reformat it to
match the E164 format, which typically includes a plus sign (+) followed
by the country code and the phone number (e.g., +1234567890).
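The sketch below shows, in Python, the kind of normalization logic such a formula field would encode. It assumes a single default country code for bare ten-digit numbers; a production formula would need rules for every input format expected in the file.

```python
import re

def to_e164(raw: str, default_country_code: str = "1") -> str:
    """Normalize assorted phone formats to E164 (assumes US as default)."""
    digits = re.sub(r"\D", "", raw)      # drop spaces, dashes, parentheses
    if raw.strip().startswith("+"):
        return "+" + digits              # country code already present
    if len(digits) == 10:
        return f"+{default_country_code}{digits}"  # bare national number
    return "+" + digits                  # fallback: assume code included

for sample in ["(415) 555-0100", "415.555.0100", "+44 20 7946 0958"]:
    print(to_e164(sample))
# +14155550100, +14155550100, +442079460958
```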

Question 20
An organization wants to enable users with the ability to
identify and select text attributes from a picklist of options.

Which Data Cloud feature should help with this use case?
Correct answer
Value suggestion
Data harmonization
Global picklist
Transformation formulas
Overall explanation
Value suggestion is a Data Cloud feature that allows users to see and
select the possible values for a text field when creating segment filters.
Value suggestion can be enabled or disabled for each data model object
(DMO) field in the DMO record home. Value suggestion can help users to
identify and select text attributes from a picklist of options, without
having to type or remember the exact values. Value suggestion can also
reduce errors and improve data quality by ensuring consistent and valid
values for the segment filters. References: Use Value Suggestions in
Segmentation, Considerations for Selecting Related Attributes
Question 21
Where is value suggestion for attributes in segmentation
enabled when creating the DMO?
Data Mapping
Data Transformation
Correct answer
Segment Setup
Data Stream Setup
Overall explanation
You can enable value suggestions for data model object (DMO) fields if
the data type is Text. Enable or deactivate the feature in the DMO record
home. You can enable value suggestions for up to 500 attributes for your
entire org. It can take up to 24 hours for suggested values to appear.
1. In Data Cloud, click the Segments tab.
2. Select a segment.
3. To use value suggestion when creating segment filters, drag the
attribute onto the canvas.
4. Start typing in the Value field for an attribute. If you don’t see the
values in your list yet, start typing the values you’ve already
ingested or type values exactly as they appear in your dataset.

Suggested values are retained for 30 days in the suggested values dataset. For attributes with more than 1,000 values, the most frequently occurring 1,000 values in the dataset are displayed alphabetically. This includes both new and old suggested values. Some operators allow you to select multiple values for an attribute. Values with more than 255 characters aren’t available as suggested values, but you can still type them in to filter on them.

https://help.salesforce.com/s/articleView?id=sf.c360_a_suggested_values_in_segments.htm&type=5

Question 22

A consultant needs to package Data Cloud components from one organization to another.

Which two Data Cloud components should the consultant include in a data kit to achieve this goal?

Choose 2 answers
Correct selection
Data model object
Correct selection
Segments
Calculated insights
Identity resolution rulesets
Overall explanation
A. Data model objects
 Data model objects define the structure and relationships of the
data in Data Cloud. These need to be included in the data kit to
replicate the data model in another organization.

B. Segments

 Segments define specific groups of individuals or entities based on certain criteria. These are often critical for activations and campaigns, so they must be included in the data kit to maintain consistency in segmentation logic.
Question 23
Which statement about Data Cloud's Web and Mobile Application
Connector is true?
Correct answer
The Tenant Specific Endpoint is auto-generated in Data Cloud
when setting the connector.
Any data streams associated with the connector will be
automatically deleted upon deleting the app from Data Cloud
Setup.
A standard schema containing event, profile, and transaction
data is created at the time the connector is configured.
The connector schema can be updated to delete an existing
field.
Overall explanation
The Web and Mobile Application Connector allows you to ingest data
from your websites and mobile apps into Data Cloud. To use this
connector, you need to set up a Tenant Specific Endpoint (TSE) in Data
Cloud, which is a unique URL that identifies your Data Cloud org. The
TSE is auto-generated when you create a connector app in Data Cloud
Setup. You can then use the TSE to configure the SDKs for your websites
and mobile apps, which will send data to Data Cloud through the TSE.
References: Web and Mobile Application Connector, Connect Your
Websites and Mobile Apps, Create a Web or Mobile App Data Stream
Question 24

A consultant needs to publish segment data to the Audience DMO that can be retrieved using the Query APIs.

When creating the activation target, which type of target should the consultant select?
External Activation Target
Marketing Cloud
Marketing Cloud Personalization
Correct answer
Data Cloud
Overall explanation
Create an activation target in Data Cloud to publish segments to the
Audience DMO. You can access data from segments without a direct
connection to an internal or external target system by publishing the
data to the Audience DMO. You can then retrieve the data using Query
APIs, similar to querying any other DMO.
1. On the Activation Targets tab, click New.
2. Select Data Cloud, and click Next.
3. Enter a unique, easily recognizable name.
4. Enter a short description and select a data space.
5. Click Save.

Your Data Cloud activation target is created.

https://help.salesforce.com/s/articleView?id=sf.c360_a_create_dc_audience_dmo_activation_target.htm&type=5
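As a sketch of the retrieval side, the Python snippet below queries the published segment data with the Query API. The API version in the path, the Audience DMO's API name, and the tenant URL are assumptions; verify them against your org and the Query API documentation.

```python
import requests

TENANT_URL = "https://your-tenant.example.salesforce.com"  # placeholder
ACCESS_TOKEN = "<OAuth access token>"  # placeholder

# Assumed DMO API name; look up the real name of the Audience DMO in your org.
sql = "SELECT * FROM Audience__dlm LIMIT 100"

response = requests.post(
    f"{TENANT_URL}/api/v2/query",  # hypothetical Query API path/version
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"sql": sql},
    timeout=60,
)
response.raise_for_status()
for row in response.json().get("data", []):
    print(row)  # each row is one published segment member record
```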
Question 25

A consultant notices that the unified individual profile is not storing the latest email address.

Which action should the consultant take to troubleshoot this issue?
Correct answer
Confirm that the reconciliation rules are correctly used.
Check if the mapping of DLO objects is correct to Contact Point
Email.
Remove any old email addresses from Salesforce CRM.
Verify and update the email address in the source systems if
needed.
Overall explanation
The issue of the unified individual profile not storing the latest email
address could be due to the reconciliation rules not being configured
correctly. Reconciliation rules determine how multiple data sources
are merged into a single profile. If the rules are incorrect, the system
might not prioritize or correctly update the latest email address from the
incoming data.
Question 26

A company wants to test its marketing campaigns with different target populations.

What should the consultant adjust in the Segment Canvas interface to get different populations?

Population filters and direct attributes
Segmentation filters, direct attributes, and data sources
Correct answer
Direct attributes, related attributes, and population filters
Direct attributes and related attributes
Overall explanation
In the Segment Canvas interface, to test different target populations
for marketing campaigns, the consultant should adjust the following
components:
 Direct attributes: These are attributes directly tied to the
individual profiles, such as demographics or transactional data.
 Related attributes: These are attributes that are linked to the
profile but not directly part of it, like data coming from other
related entities or interactions.
 Population filters: These filters allow you to narrow down the
population based on specific criteria or conditions, ensuring that
only the relevant profiles are included in the segment.

Together, these adjustments allow for creating distinct target populations based on various combinations of data, attributes, and filtering criteria.

Question 27

Northern Trail Outfitters wants to be able to calculate each customer's lifetime value (LTV) but also create breakdowns of the revenue sourced by website, mobile app, and retail channels.

What should a consultant use to address this use case in Data Cloud?
Correct answer
Metrics on metrics
Nested segments
Streaming data transform
Flow Orchestration
Overall explanation
To calculate lifetime value (LTV) and also create breakdowns of
revenue sourced by different channels (website, mobile app, and
retail), Metrics on metrics is the most suitable solution in Data Cloud.

This allows the creation of custom metrics that can aggregate data
based on multiple dimensions (such as channels) and calculate a metric
like LTV, while also providing insights into how the revenue is distributed
across the different channels.

Question 28
Cloud Kicks received a Request to be Forgotten by a customer.
In which two ways should a consultant use Data Cloud to honor
this request? (Choose two.)
Correct selection
Use the Consent API to suppress processing and delete the
individual and related records from source data streams.
Use Data Explorer to locate and manually remove the Individual.
Delete the data from the incoming data stream and perform a
full refresh.
Correct selection
Add the Individual ID to a headerless file and use the delete
from file functionality.
Overall explanation
1. A. Use the Consent API to suppress processing and delete
the individual and related records from source data
streams: The Consent API is specifically designed to handle
privacy requests, such as the "right to be forgotten." It enables the
suppression of data processing and ensures that the individual's
data is deleted from the data streams, adhering to privacy
regulations like GDPR.
2. D. Add the Individual ID to a headerless file and use the
delete from file functionality: This method involves creating
a headerless file that includes the individual’s ID and using
the delete from file functionality to remove the data. This is an
alternative method for deletion when an individual’s data needs to
be removed from the system in bulk or for specific privacy
requests.
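For option D, the file format matters: one Individual ID per line and no header row. A minimal Python sketch, with placeholder IDs, is shown below.

```python
# Build the headerless file used by the delete-from-file functionality.
individual_ids = [
    "IND-000001",  # placeholder Individual IDs
    "IND-000002",
]

with open("individuals_to_delete.csv", "w", encoding="utf-8") as f:
    for individual_id in individual_ids:
        f.write(individual_id + "\n")  # deliberately no header row
```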
Question 29

Which two requirements must be met for a calculated insight to appear in the segmentation canvas?

Choose 2 answers

The metrics of the calculated insights must only contain numeric values.
The primary key of the segmented table must be a metric in the
calculated insight.
Correct selection
The calculated insight must contain a dimension including the
Individual or Unified Individual Id.
Correct selection
The primary key of the segmented table must be a dimension in
the calculated insight.
Overall explanation
A calculated insight is a custom metric or measure that is derived from
one or more data model objects or data lake objects in Data Cloud. A
calculated insight can be used in segmentation to filter or group the data
based on the calculated value. However, not all calculated insights can
appear in the segmentation canvas. There are two requirements that
must be met for a calculated insight to appear in the segmentation
canvas:

The calculated insight must contain a dimension including the Individual or Unified Individual Id. A dimension is a field that can be used to categorize or group the data, such as name, gender, or location. The Individual or Unified Individual Id is a unique identifier for each individual profile in Data Cloud. The calculated insight must include this dimension to link the calculated value to the individual profile and to enable segmentation based on the individual profile attributes.

The primary key of the segmented table must be a dimension in the calculated insight. The primary key is a field that uniquely identifies each record in a table. The segmented table is the table that contains the data that is being segmented, such as the Customer or the Order table. The calculated insight must include the primary key of the segmented table as a dimension to ensure that the calculated value is associated with the correct record in the segmented table and to avoid duplication or inconsistency in the segmentation results.
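A hedged sketch of a calculated insight definition that meets both requirements is shown below: the aggregate is the measure, and the Unified Individual Id is the dimension. Every object and field API name here is an illustrative assumption, not a name from a real org, though the __dlm and __c suffixes follow common Data Cloud conventions.

```python
# Illustrative calculated-insight SQL held as a Python string; all object
# and field names below are assumptions made for the sake of the example.
calculated_insight_sql = """
SELECT
    SUM(SalesOrder__dlm.GrandTotalAmount__c) AS total_spend__c,   -- measure
    UnifiedIndividual__dlm.ssot__Id__c       AS individual_id__c  -- dimension
FROM SalesOrder__dlm
JOIN UnifiedIndividual__dlm
    ON SalesOrder__dlm.IndividualId__c = UnifiedIndividual__dlm.ssot__Id__c
GROUP BY UnifiedIndividual__dlm.ssot__Id__c
"""
```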

Question 30
A customer wants to use the transactional data from their data
warehouse in Data Cloud.

They are only able to export the data via an SFTP site.

How should the file be brought into Data Cloud?

Correct answer
Ingest the file with the SFTP Connector.
Ingest the file through the Cloud Storage Connector.
Manually import the file using the Data Import Wizard.
Use Salesforce's Dataloader application to perform a bulk
upload from a desktop.
Overall explanation
The SFTP Connector is a data source connector that allows Data Cloud to
ingest data from an SFTP server.
The customer can use the SFTP Connector to create a data stream from
their exported file and bring it into Data Cloud as a data lake object. The
other options are not the best ways to bring the file into Data Cloud
because:

* B. The Cloud Storage Connector is a data source connector that allows Data Cloud to ingest data from cloud storage services such as Amazon S3, Azure Storage, or Google Cloud Storage. The customer does not have their data in any of these services, but only on an SFTP site.

* C. The Data Import Wizard is a tool that allows users to import data for
many standard Salesforce objects, such as accounts, contacts, leads,
solutions, and campaign members. It is not designed to import data from
an SFTP site or for custom objects in Data Cloud.

* D. The Dataloader is an application that allows users to insert, update, delete, or export Salesforce records. It is not designed to ingest data from an SFTP site or into Data Cloud.

References: SFTP Connector - Salesforce, Create Data Streams with the SFTP Connector in Data Cloud - Salesforce, Data Import Wizard - Salesforce, Salesforce Data Loader

Question 31

A retailer wants to unify profiles using Loyalty ID which is different than the unique ID of their customers.

Which object should the consultant use in identity resolution to perform exact match rules on the Loyalty ID?

Correct answer
Party Identification object
Loyalty Identification object
Individual object
Contact Identification object
Overall explanation
The Party Identification object is the correct object to use in identity
resolution to perform exact match rules on the Loyalty ID. The Party
Identification object is a child object of the Individual object that stores
different types of identifiers for an individual, such as email, phone,
loyalty ID, social media handle, etc. Each identifier has a type, a value,
and a source. The consultant can use the Party Identification object to
create a match rule that compares the Loyalty ID type and value across
different sources and links the corresponding individuals.

The other options are not correct objects to use in identity resolution to perform exact match rules on the Loyalty ID. The Loyalty Identification object does not exist in Data Cloud. The Individual object is the parent object that represents a unified profile of an individual, but it does not store the Loyalty ID directly. The Contact Identification object is a child object of the Contact object that stores identifiers for a contact, such as email, phone, etc., but it does not store the Loyalty ID.

Question 32
Which two steps should a consultant take if a successfully
configured Amazon S3 data stream fails to refresh with a "NO
FILE FOUND" error message?
Choose 2 answers
Correct selection
Check if correct permissions are configured for the Data Cloud
user.
Check if the Amazon S3 data source is enabled in Data Cloud
Setup.
Correct selection
Check if the file exists in the specified bucket location.
Check if correct permissions are configured for the S3 user.
Overall explanation
A. Check if correct permissions are configured for the Data
Cloud user.
 The Data Cloud user needs the correct permissions to access
and retrieve files from the Amazon S3 bucket.
 If the user lacks the necessary permissions, Data Cloud won't be
able to detect or fetch the files, leading to the "NO FILE
FOUND" error.

C. Check if the file exists in the specified bucket location.

 The error message "NO FILE FOUND" suggests that Data Cloud is
unable to locate the file in the expected S3 bucket.
 Ensure that the file exists in the correct bucket and folder path
that was specified in the data stream configuration.
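For step C, the existence check can be scripted. The sketch below uses boto3's head_object to confirm the file sits at the configured path; bucket and key are placeholders, and AWS credentials are assumed to be configured in the environment.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "my-data-cloud-bucket", "exports/customers.csv"  # placeholders

try:
    head = s3.head_object(Bucket=bucket, Key=key)
    print(f"Found {key}: {head['ContentLength']} bytes")
except ClientError as err:
    if err.response["Error"]["Code"] == "404":
        # Matches the NO FILE FOUND symptom: nothing at the configured path.
        print("File not found at the configured bucket location")
    else:
        raise  # e.g., 403 errors point at permissions (step A) instead
```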
Question 33
During discovery, which feature should a consultant highlight
for a customer who has multiple data sources and needs to
match and reconcile data about individuals into a single unified
profile?
Harmonization
Data Cleansing
Data Consolidation
Correct answer
Identity Resolution
Overall explanation
Identity Resolution is the feature that allows you to match and reconcile
data about individuals from multiple data sources into a single unified
profile. This is crucial when a customer has multiple data sources and
needs to merge or link data to create a comprehensive and accurate
profile for each individual.
Question 34

A consultant is working in a customer's Data Cloud org and is asked to delete the existing identity resolution ruleset.

Which two impacts should the consultant communicate as a result of this action?

Choose 2 answers

All individual data will be removed.
Correct selection
Unified customer data associated with this ruleset will be
removed.
Correct selection
Dependencies on data model objects will be removed.
All source profile data will be removed.
Overall explanation
Deleting an identity resolution ruleset has two major impacts that the consultant should communicate to the customer. First, it will permanently remove all unified customer data that was created by the ruleset, meaning that the unified profiles and their attributes will no longer be available in Data Cloud. Second, it will eliminate dependencies on data model objects that were used by the ruleset, meaning that the data model objects can be modified or deleted without affecting the ruleset. These impacts can have significant consequences for the customer's data quality, segmentation, activation, and analytics, so the consultant should advise the customer to carefully consider the implications of deleting a ruleset before proceeding. The other options are incorrect because they are not impacts of deleting a ruleset. Option A is incorrect because deleting a ruleset will not remove all individual data, but only the unified customer data. The individual data from the source systems will still be available in Data Cloud. Option D is incorrect because deleting a ruleset will not remove all source profile data, but only the unified customer data. The source profile data from the data streams will still be available in Data Cloud. References: Delete an Identity Resolution Ruleset

https://help.salesforce.com/s/articleView?id=sf.c360_a_identity_resolution_deletion.htm&type=5

Question 35

Which consideration related to the way Data Cloud ingests CRM data is true?

CRM data cannot be manually refreshed and must wait for the next scheduled synchronization.
The CRM Connector's synchronization times can be customized
to up to 15-minute intervals.
Correct answer
Formula fields are refreshed at regular sync intervals and are
updated at the next full refresh.
The CRM Connector allows standard fields to stream into Data
Cloud in real time
Overall explanation
Formula fields are refreshed at regular sync intervals and are
updated at the next full refresh.

This is true because in Data Cloud, formula fields from CRM systems are
updated during regular synchronization intervals, and changes are
reflected at the next full refresh.

Other options are incorrect for the following reasons:

 A. CRM data can be manually refreshed, depending on the setup and the tools used.
 B. CRM Connector synchronization intervals are typically not
customizable to such short intervals; they are usually longer.
 D. Standard fields do not stream into Data Cloud in real time via
the CRM Connector; data is generally synced in intervals, not in
real time.
Question 36

To import campaign members into a campaign in Salesforce CRM, a user wants to export the segment to Amazon S3. The resulting file needs to include the Salesforce CRM Campaign ID in the name.

What are two ways to achieve this outcome?

Choose 2 answers

Correct selection
Include campaign identifier in the activation name.
Hard code the campaign identifier as a new attribute in the
campaign activation.
Correct selection
Include campaign identifier in the filename specification.
Include campaign identifier in the segment name.
Overall explanation
The two ways to achieve this outcome are A and C. Include campaign
identifier in the activation name and include campaign identifier in the
filename specification. These two options allow the user to specify the
Salesforce CRM Campaign ID in the name of the file that is exported to
Amazon S3. The activation name and the filename specification are both
configurable settings in the activation wizard, where the user can enter
the campaign identifier as a text or a variable. The activation name is
used as the prefix of the filename, and the filename specification is used
as the suffix of the filename. For example, if the activation name is "Campaign_123" and the filename specification is "{segmentName}_{date}", the resulting file name will be "Campaign_123_SegmentA_2023-12-18.csv". This way, the user can easily identify the file that corresponds to the campaign and import it into Salesforce CRM.

The other options are not correct. Option B is incorrect because hard
coding the campaign identifier as a new attribute in the campaign
activation is not possible. The campaign activation does not have any
attributes, only settings. Option D is incorrect because including the
campaign identifier in the segment name is not sufficient.

The segment name is not used in the filename of the exported file,
unless it is specified in the filename specification. Therefore, the user
will not be able to see the campaign identifier in the file name.
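The sketch below reproduces the filename composition described above, using the document's own example values; the exact composition rule is simplified for illustration.

```python
from datetime import date

activation_name = "Campaign_123"  # prefix: carries the CRM Campaign ID
segment_name = "SegmentA"
run_date = date(2023, 12, 18)

# Filename specification "{segmentName}_{date}" supplies the suffix.
filename = f"{activation_name}_{segment_name}_{run_date.isoformat()}.csv"
print(filename)  # Campaign_123_SegmentA_2023-12-18.csv
```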

Question 37
A Data Cloud customer wants to adjust their identity resolution
rules to increase their accuracy of matches. Rather than
matching on email address, they want to review a rule that joins
their CRM Contacts with their Marketing Contacts, where both
use the CRM ID as their primary key.
Which two steps should the consultant take to address this new
use case?

Choose 2 answers

Correct selection
Map the primary key from the two systems to Party
Identification, using CRM ID as the identification name for both.
Map the primary key from the two systems to party
identification, using CRM ID as the identification name for
individuals coming from the CRM, and Marketing ID as the
identification name for individuals coming from the marketing
platform.
Create a custom matching rule for an exact match on the
Individual ID attribute.
Correct selection
Create a matching rule based on party identification that
matches on CRM ID as the party identification name.
Overall explanation
1. A: Mapping the CRM ID from both systems to the same Party
Identification ensures consistency and enables matching based on
a unified identifier. This is critical for identity resolution when the
same primary key (CRM ID) exists across data sources.
2. D: Creating a matching rule based on Party Identification using
CRM ID ensures that individuals are matched accurately across the
CRM and Marketing systems using the primary key.
Question 38
When performing segmentation or activation, which time zone is
used to publish and refresh data?
Time zone specified on the activity at the time of creation
Time zone of the user creating the activity
Time zone of the Data Cloud Admin user
Correct answer
Time zone set by the Salesforce Data Cloud org
Overall explanation
When performing segmentation or activation, the time zone configured
at the Salesforce Data Cloud org level is used for publishing and
refreshing data. This ensures consistency across all activities, regardless
of the time zone of individual users or activities.
Question 39

Cloud Kicks wants to be able to build a segment of customers who have visited its website within the previous 7 days.

Which filter operator on the Engagement Date field fits this use case?

Is Between
Greater than Last Number of
Next Number of Days
Correct answer
Last Number of Days
Overall explanation
The "Last Number of Days" filter operator is ideal for creating a
segment of customers who have visited the website within the previous
7 days. It allows you to specify a dynamic range, such as "Last 7 Days,"
to include customers based on engagement activity in that timeframe.
Question 40
A retail customer wants to bring customer data from different
sources and wants to take advantage of Identity Resolution so
that it can be used in Segmentation. On which entity should this
be segmented for activation membership?
Subscriber
Unified Contact
Correct answer
Unified Individual
Individual
Overall explanation
In Salesforce Data Cloud, segmentation for activation membership
should be based on the Unified Individual entity. This entity
represents a consolidated view of a person after applying Identity
Resolution rules, combining data from different sources into a single,
unified profile.
 A. Subscriber: Typically relates to specific marketing contexts
like email or messaging, not the full unified profile.
 B. Unified Contact: This is not the standard entity for
segmentation; it might refer to CRM-specific data.
 D. Individual: Refers to a general data model object, but it lacks
the consolidation provided by Identity Resolution, which is
essential for segmentation and activation.
Question 41

A Data Cloud consultant is evaluating the initial phase of the Data Cloud lifecycle for a company.

Which action is essential to effectively begin the Data Cloud lifecycle?
Correct answer
Identify use cases and the required data sources and data
quality.
Analyze and partition the data into data spaces.
Migrate the existing data into the Customer 360 Data Model.
Use calculated insights to determine the benefits of Data Cloud for this company.
Overall explanation
The first and most essential step in the Data Cloud lifecycle is to clearly
identify the company's use cases, the data sources required to support
them, and the quality of the data available. This foundational step
ensures that the implementation is aligned with business goals, and that
the necessary data is prepared for ingestion, modeling, and activation in
Data Cloud.
 B. Partitioning data into data spaces happens later in the process,
during data organization.
 C. Migrating data into the Customer 360 Data Model is a
subsequent step once use cases and required data are identified.
 D. Using calculated insights comes much later in the lifecycle,
typically after data ingestion, unification, and preparation for
analysis.
Question 42

A Data Cloud consultant tries to save a new 1-to-1 relationship between the Account DMO and Contact Point Address DMO but gets an error.

What should the consultant do to fix this error?

Map additional fields to the Contact Point Address DMO.
Make sure that the total account records are high enough for identity resolution.
Correct answer
Change the cardinality to many-to-one to accommodate multiple
contacts per account.
Map Account to Contact Point Email and Contact Point Phone
also.
Overall explanation
1. The Account DMO can have multiple associated Contact Point
Addresses (e.g., billing address, shipping address, etc.).
2. If the consultant is trying to define a 1-to-1 relationship, but
multiple addresses exist for a single account, the system will
trigger an error.
3. To resolve this, the cardinality should be updated to many-to-one (M:1) so that multiple Contact Point Addresses can be linked to a single Account properly.
Question 43
A consultant wants to make sure address details from customer
orders are selected as best to save to the unified profile.
What should the consultant do to achieve this?
Select the address details on the Contact Point Address. Change
the reconciliation rules for the specific address attributes to
Source Priority and move the Individual DMO to the bottom.
Use the default reconciliation rules for Contact Point Address.
Correct answer
Select the address details on the Contact Point Address. Change
the reconciliation rules for the specific address attributes to
Source Priority and move the Order DMO to the top.
Change the default reconciliation rules for Individual to Source
Priority.
Overall explanation
1. To ensure that the address details from customer orders are
prioritized as the best address for the unified profile, the
consultant needs to focus on the Contact Point
Address attributes and set reconciliation rules accordingly.
2. By setting the reconciliation rules for address attributes
to Source Priority and moving the Order DMO to the top, the
system will prioritize address details from orders over other
potential sources when determining the best address for the
unified profile
Question 44

Which data stream category type should be assigned in order to use the dataset for date and time-based operations in segmentation and calculated insights?

Individual
Profile
Sales Order
Correct answer
Engagement
Overall explanation
1. Engagement data streams are specifically used for date and
time-based operations in segmentation and calculated
insights.
2. This category is designed for event-driven data like interactions,
transactions, or customer behaviors that happen over time.
3. Engagement datasets typically include timestamps that allow
users to create time-based segmentations, track customer
activities, and analyze trends over a period.
Question 45
What are the two minimum requirements needed when using
the Visual Insights Builder to create a Calculated Insight?
WHERE clause is required
At least two objects to join
Correct selection
At least one dimension
Correct selection
At least one measure
Overall explanation
 Dimension: Represents categorical data (e.g., region, product
category) used to group or segment the insights.
 Measure: Represents numerical data or metrics (e.g., revenue,
count of orders) that can be aggregated or calculated.

Both are essential to create meaningful insights, as the measure provides the quantitative value to analyze, while the dimension provides the context or grouping for the analysis.

Question 46
Which tool allows users to visualize and analyze unified
customer data in Data Cloud?
Salesforce CLI
Heroku
Einstein Analytics
Correct answer
Tableau
Overall explanation
 Unlock Hidden Insights with Tableau
After you’ve combined, unified, and harmonized your data in Data
Cloud, it’s now ready for you to analyze. Use Tableau to find useful
and actionable insights that can drive business success and create
a personalized experience for your customers. Data Cloud,
Tableau, and Salesforce 360 apps work together to consolidate
your data and create a single customer record. Unlock hidden
insights, and make decisions based on all your customer data.

Question 47
What is the role of artificial intelligence (AI) in Data Cloud?
Automating data validation
Creating dynamic data-driven management dashboards
Correct answer
Enhancing customer interactions through insights and
predictions
Generating email templates for use cases
Overall explanation
Role of AI in Data Cloud: Artificial intelligence (AI) plays a crucial role in
Salesforce Data Cloud by leveraging data to generate insights and
predictions that enhance customer interactions.
Insights and Predictions:
 AI Algorithms: Use machine learning algorithms to analyze vast amounts of customer data.
 Predictive Analytics: Provide predictive insights, such as customer behavior trends, preferences, and potential future actions.

Enhancing Customer Interactions:
 Personalization: AI helps in creating personalized experiences by predicting customer needs and preferences.
 Efficiency: Enables proactive customer service by predicting issues and suggesting solutions before customers reach out.
 Marketing: Improves targeting and segmentation, ensuring that marketing efforts are directed towards the most promising leads and customers.

Use Cases:
 Recommendation Engines: Suggest products or services based on past behavior and preferences.
 Churn Prediction: Identify customers at risk of leaving and engage them with retention strategies.
Question 48Skipped
What is the primary purpose of Data Cloud?
Providing a golden record of a customer
Managing sales cycles and opportunities
Analyzing marketing data results
Correct answer
Integrating and unifying customer data
Overall explanation
Salesforce Data Cloud's main function is to integrate and unify customer
data from various sources, creating a single, comprehensive view of
each customer.
Benefits of Data Integration and Unification:
 Golden Record: Providing a unified, accurate view of the customer.
 Enhanced Analysis: Enabling better insights and analytics through comprehensive data.
 Improved Customer Engagement: Facilitating personalized and consistent customer experiences across channels.

Steps for Data Integration:
 Ingest data from multiple sources (CRM, marketing, service platforms).
 Use data harmonization and reconciliation processes to unify data into a single profile.

Practical Application:
 Example: A retail company integrates customer data from online purchases, in-store transactions, and customer service interactions to create a unified customer profile. This unified data enables personalized marketing campaigns and improved customer service.
Question 49Skipped
A customer notices that their consolidation rate is low across
their account unification. They have mapped Account to the
Individual and Contact Point Email DMOs.
What should they do to increase their consolidation rate?
Change reconciliation rules to Most Occurring.
Disable the individual identity ruleset.
Correct answer
Increase the number of matching rules.
Update their account address details in the data source.
Overall explanation
A low consolidation rate means that fewer records are being unified than
expected. By increasing the number of matching rules, you provide
additional criteria for linking records from different sources. This
improves the chances of identifying and consolidating duplicate or
related records into unified profiles.
Question 50Skipped
A consultant is setting up a data stream with transactional data. Which field type should the consultant choose to ensure that leading zeros in the purchase order number are preserved?

Correct answer
Text
Number
Decimal
Serial
Overall explanation
To ensure that leading zeros in the purchase order number are
preserved, the field type should be set to Text. Numeric field types such
as Number or Decimal automatically drop leading zeros because they
treat the value as a numerical quantity. Using a Text field preserves the
exact formatting, including any leading zeros, as the value is treated as
a string rather than a number.
Question 51Skipped
Every day, Northern Trail Outfitters uploads a summary of the
last 24 hours of store transactions to a new file in an Amazon S3
bucket, and files older than seven days are automatically
deleted. Each file contains a timestamp in a standardized
naming convention.
Which two options should a consultant configure when ingesting
this data stream?
Choose 2 answers
Ensure that deletion of old files is enabled.
Correct selection
Ensure the refresh mode is set to "Upsert".
Correct selection
Ensure the filename contains a wildcard to accommodate the
timestamp.
Ensure the refresh mode is set to "Full Refresh."
Overall explanation
When ingesting data from an Amazon S3 bucket, the consultant should
configure the following options:

* The refresh mode should be set to "Upsert", which means that new and
updated records will be added or updated in Data Cloud, while existing
records will be preserved. This ensures that the data is always up to
date and consistent with the source.
* The filename should contain a wildcard to accommodate the
timestamp, which means that the file name pattern should include a
variable part that matches the timestamp format. For example, if the file
name is store_transactions_2023-12-18.csv, the wildcard could be
store_transactions_*.csv. This ensures that the ingestion process can
identify and process the correct file every day.

The other options are not necessary or relevant for this scenario:

 Deletion of old files is a feature of the Amazon S3 bucket, not the Data Cloud ingestion process. Data Cloud does not delete any files from the source, nor does it require the source files to be deleted after ingestion.
 Full Refresh is a refresh mode that deletes all existing records in Data Cloud and replaces them with the records from the source file. This is not suitable for this scenario, as it would result in data loss and inconsistency, especially if the source file only contains the summary of the last 24 hours of transactions.

References: Ingest Data from Amazon S3, Refresh Modes
Question 52Skipped
A customer has a calculated insight about lifetime value.
What does the consultant need to be aware of if the calculated insight needs to be modified?
New dimensions can be added.
Existing dimensions can be removed.
Existing measures can be removed.
Correct answer
New measures can be added.
Overall explanation
 You can add a measure to an existing calculated insight (see the sketch after this list).
 You can’t change the API name, data type, or rollup behavior.
 You can add only aggregatable measures to an aggregatable
calculated insight.
 You can add any measure to a non-aggregatable calculated
insight.
 You can’t remove existing measures.
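As a minimal sketch of the one allowed structural change, assuming hypothetical object and field names (SalesOrder__dlm, Amount__c, Id__c, CustomerId__c):

SELECT
  SUM(SalesOrder__dlm.Amount__c) AS lifetime_value__c, -- existing measure: API name, data type, and rollup stay unchanged
  COUNT(SalesOrder__dlm.Id__c) AS order_count__c,      -- newly added aggregatable measure: allowed
  SalesOrder__dlm.CustomerId__c AS customer_id__c      -- existing dimension
FROM SalesOrder__dlm
GROUP BY customer_id__c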
Resources
Salesforce Doc
Question 53Skipped
What is a reason to create a formula when ingesting a data
stream?
Correct answer
To transform a datetime field into a date field for use in data
mapping
To concatenate files so they are ingested in the correct
sequence
To remove duplicate rows of data from the data stream
To add a unique external identifier to an existing ruleset
Overall explanation
To transform a datetime field into a date field for use in data
mapping.
Here's why:

When ingesting a data stream, you might need to transform the data to ensure it's in the correct format for analysis or to match the data structure required by your system. For example, if a datetime field contains both the date and time, but you only need the date (without the time) for your data mapping or analysis purposes, you would create a formula to convert that datetime field into just a date field. This ensures the data is correctly formatted and usable in the context you need.
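For example (a hypothetical illustration; the source column name is invented), a formula field of data type Date could wrap the incoming column, such as applying a date-conversion function from the formula expression library to sourceField['Purchase_Timestamp'], so that only the date portion is available for mapping to a date field in the data model.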
Question 54Skipped
A user has built a segment in Data Cloud and is in the process of
creating an activation. When selecting related attributes, they
cannot find a specific set of attributes they know to be related to the individual.

Which statement explains why these attributes are not available?

The segment is not segmenting on profile data.
The attributes are being used in another activation.
Correct answer
The desired attributes reside on different related paths.
Activations can only include 1-to-1 attributes.
Overall explanation
The desired attributes reside on different related paths. When creating an activation in Data Cloud, you can select related attributes from data model objects that are linked to the segment entity. However, not all related attributes are available for every activation. The availability of related attributes depends on the container path, which is the sequence of data model objects that connects the segment entity to the related entity.

For example, if you segment on the Unified Individual entity, you can select related attributes from the Order Product entity, but only if the container path is Unified Individual > Order > Order Product. If the container path is Unified Individual > Order Line Item > Order Product, then the related attributes from Order Product are not available for activation. This is because Data Cloud only supports one-to-many relationships for related attributes, and Order Line Item is a many-to-many junction object between Order and Order Product. Therefore, you need to ensure that the desired attributes reside on the same related path as the segment entity, and that the path does not include any many-to-many junction objects.

The other options are incorrect because they do not explain why the related attributes are not available. The segment entity can be any data model object, not just profile data. The attributes are not restricted by being used in another activation. Activations can include one-to-many attributes, not just one-to-one attributes.
Question 55Skipped
A customer has outlined requirements to trigger a journey for
an abandoned browse behavior. Based on the requirements, the
consultant determines they will use streaming insights to
trigger a data action to Journey Builder every hour.
How should the consultant configure the solution to ensure the data action is triggered at the cadence required?

Set the activation schedule to hourly.
Configure the data to be ingested in hourly batches.
Set the journey entry schedule to run every hour.
Correct answer
Set the insights aggregation time window to 1 hour.
Overall explanation
Streaming insights are computed from real-time engagement events and
can be used to trigger data actions based on pre-set rules. Data actions
are workflows that send data from Data Cloud to other systems, such as
Journey Builder. To ensure that the data action is triggered every hour,
the consultant should set the insights aggregation time window to 1
hour. This means that the streaming insight will evaluate the events that
occurred within the last hour and execute the data action if the
conditions are met. The other options are not relevant for streaming
insights and data actions. References: Streaming Insights and Data Actions Limits and Behaviors, Streaming Insights, Streaming Insights and Data Actions Use Cases, Use Insights in Data Cloud, 6 Ways the Latest Marketing Cloud Release Can Boost Your Campaigns
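A minimal sketch of such a streaming insight follows; the DMO and field names (WebEngagement__dlm, EventType__c, IndividualId__c) are hypothetical, and the 1-hour aggregation time window is a setting chosen on the insight itself rather than part of the query text:

SELECT
  COUNT(*) AS browse_count__c,                           -- measure, evaluated over the insight's 1-hour window
  WebEngagement__dlm.IndividualId__c AS individual_id__c -- dimension
FROM WebEngagement__dlm
WHERE WebEngagement__dlm.EventType__c = 'product_browse'
GROUP BY individual_id__c

A data action rule on browse_count__c then fires to Journey Builder each time the hourly window is evaluated and the condition is met.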
Question 56Skipped
A Data Cloud consultant recently added a new data source and
mapped some of the data to a new custom data model object
(DMO) that they want to use for creating segments. However,
they cannot view the newly created DMO when trying to create a
new segment.
What is the cause of this issue?
Data has not yet been ingested into the DMO.
Correct answer
The new DMO is not of category Profile.
The new DMO does not have a relationship to the Individual DMO.
Segmentation is only supported for the Individual and Unified
Individual DMOs.
Overall explanation
The cause of this issue is that the new custom data model object (DMO) is not of category Profile. A category is a property of a DMO that defines its purpose and functionality in Data Cloud. There are three categories of DMOs: Profile, Engagement, and Other. Profile DMOs are used to store attributes of individuals or entities, such as name, email, address, etc. Engagement DMOs are used to store actions or interactions of individuals or entities, such as purchases, clicks, visits, etc. Other DMOs are used to store any other type of data that does not fit into the Profile or Engagement categories, such as products, locations, categories, etc.

Only Profile DMOs can be used for creating segments in Data Cloud, as segments are based on the attributes of individuals or entities. Therefore, if the new custom DMO is not of category Profile, it will not appear in the segmentation canvas.

The other options are not correct because they are not the cause of this issue. Data ingestion is not a prerequisite for creating segments, as segments can be created based on the data model schema without actual data. The new DMO does not need to have a relationship to the Individual DMO, as segments can be created based on any Profile DMO, regardless of its relationship to other DMOs. Segmentation is not only supported for the Individual and Unified Individual DMOs, as segments can be created based on any Profile DMO, including custom ones.
Question 57Skipped
A consultant is helping a beauty company ingest its profile data
into Data Cloud. The company’s source data includes several
fields, such as eye color, skin type, and hair color, that are not
fields in the standard Individual data model object (DMO). What
should the consultant recommend to map this data to be used
for both segmentation and identity resolution?
Create a custom DMO from scratch that has all fields that are
needed.
Create a custom DMO with only the additional fields and map it
to the standard Individual DMO.
Correct answer
Create custom fields on the standard Individual DMO.
Duplicate the standard Individual DMO and add the additional
fields.
Overall explanation
The best option to map the data to be used for both segmentation and
identity resolution is to create custom fields on the standard Individual
DMO. This way, the consultant can leverage the existing fields and
functionality of the Individual DMO, such as identity resolution rulesets,
calculated insights, and data actions, while adding the additional fields
that are specific to the beauty company's data. Creating a custom DMO
from scratch or duplicating the standard Individual DMO would require
more effort and maintenance, and might not be compatible with the
existing features of Data Cloud. Creating a custom DMO with only the
additional fields and mapping it to the standard Individual DMO would
create unnecessary complexity and redundancy, and might not allow the
use of the custom fields for identity resolution.
Question 58Skipped
A customer has a custom Customer_Email__c object related to the standard Contact object in Salesforce CRM.

This custom object stores the email address of a Contact that they want to use for activation.

To which data entity should it be mapped?

Contact
Correct answer
Contact Point Email
Custom Customer_Email__c object
Individual
Overall explanation
The Contact Point Email object is the data entity that represents an email address associated with an individual in Data Cloud. It is part of the Customer 360 Data Model, which is a standardized data model that defines common entities and relationships for customer data. The Contact Point Email object can be mapped to any custom or standard object that stores email addresses in Salesforce CRM, such as the custom Customer_Email__c object. The other options are not the correct data entities to map to because:
 A. The Contact object is the data entity that represents a person who is associated with an account that is a customer, partner, or competitor in Salesforce CRM. It is not the data entity that represents an email address in Data Cloud.
 C. The custom Customer_Email__c object is not a data entity in Data Cloud, but a custom object in Salesforce CRM. It can be mapped to a data entity in Data Cloud, such as the Contact Point Email object, but it is not a data entity itself.
 D. The Individual object is the data entity that represents a unique person in Data Cloud. It is the core entity for managing consent and privacy preferences, and it can be related to one or more contact points, such as email addresses, phone numbers, or social media handles. It is not the data entity that represents an email address in Data Cloud.
References: Customer 360 Data Model: Individual and Contact Points - Salesforce, Contact Point Email | Object Reference for the Salesforce Platform | Salesforce Developers, Contact | Object Reference for the Salesforce Platform | Salesforce Developers, Individual | Object Reference for the Salesforce Platform | Salesforce Developers
Question 59Skipped
A healthcare client wants to make use of identity resolution, but
does not want to risk unifying profiles that may share certain
personally identifying information (PII).
Which matching rule criteria should a consultant recommend for
the most accurate matching results?
Correct answer
Party Identification on Patient ID
Exact Last Name and Email
Email Address and Phone
Fuzzy First Name, Exact Last Name, and Email
Overall explanation
Identity resolution is the process of linking data from different sources into a unified profile of a customer or an individual. Identity resolution uses matching rules to compare the attributes of different records and determine if they belong to the same person. Matching rules can be based on exact or fuzzy matching of various attributes, such as name, email, phone, address, or custom identifiers.

A healthcare client who wants to use identity resolution, but does not want to risk unifying profiles that may share certain personally identifying information (PII), such as name or email, should use matching rule criteria based on a unique and reliable identifier that is specific to the healthcare domain. One such identifier is the patient ID, which is a unique number assigned to each patient by a healthcare provider or system. By using Party Identification on patient ID as the matching rule criteria, the healthcare client can ensure that only records with the same patient ID are matched and unified, and avoid false positives or false negatives that may occur due to common or similar names or emails. Party Identification on patient ID is also a secure and compliant way of handling sensitive healthcare data, as it does not expose or share any PII that may be subject to data protection regulations or standards.

References: Configure Identity Resolution Rulesets, A framework of identity resolution: evaluating identity attributes and methods
Question 60Skipped
The recruiting team at Cumulus Financial wants to identify
which candidates have browsed the jobs page on its website at
least twice within the last 24 hours. They want the information
about these candidates to be available for segmentation in Data
Cloud and the candidates added to their recruiting system.
Which feature should a consultant recommend to achieve this goal?

Streaming data transform
Correct answer
Streaming insight
Calculated insight
Batch data transform
Overall explanation
A streaming insight is a feature that allows users to create and monitor
real-time metrics from streaming data sources, such as web and mobile
events. A streaming insight can also trigger data actions, such as
sending notifications, creating records, or updating fields, based on the
metric values and conditions. Therefore, a streaming insight is the best
feature to achieve the goal of identifying candidates who have browsed
the jobs page on the website at least twice within the last 24 hours, and
adding them to the recruiting system. The other options are incorrect
because:
* A streaming data transform is a feature that allows users to transform and enrich streaming data using SQL expressions, such as filtering, joining, aggregating, or calculating values. However, a streaming data transform does not provide the ability to monitor metrics or trigger data actions based on conditions.

* A calculated insight is a feature that allows users to define and calculate multidimensional metrics from data using SQL expressions, such as LTV, CSAT, or average order value. However, a calculated insight is not suitable for real-time data analysis, as it runs on a scheduled basis and does not support data actions.

* A batch data transform is a feature that allows users to create and schedule complex data transformations using a visual editor, such as joining, aggregating, filtering, or appending data. However, a batch data transform is not suitable for real-time data analysis, as it runs on a scheduled basis and does not support data actions.

References: Streaming Insights, Create a Streaming Insight, Use Insights in Data Cloud, Learn About Data Cloud Insights, Data Cloud Insights Using SQL, Streaming Data Transforms, Get Started with Batch Data Transforms in Data Cloud, Transformations for Batch Data Transforms, Batch Data Transforms in Data Cloud: Quick Look, Salesforce Data Cloud: AI CDP.
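As an illustration, a streaming insight for the recruiting requirement could resemble the following sketch; all DMO and field names here are hypothetical:

SELECT
  COUNT(*) AS jobs_page_views__c,                        -- measure: views of the jobs page
  WebEngagement__dlm.IndividualId__c AS individual_id__c -- dimension: the candidate
FROM WebEngagement__dlm
WHERE WebEngagement__dlm.PageName__c = 'jobs'
GROUP BY individual_id__c

With the insight's aggregation window set to 24 hours and a data action rule of jobs_page_views__c >= 2, qualifying candidates can be pushed to the recruiting system while the results remain available for segmentation.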

Question 61Skipped
Cumulus Financial wants its service agents to view a display of
all cases associated with a Unified Individual on a contact
record.
Which two features should a consultant consider for this use case?
Data Action
Correct selection
Profile API
Correct selection
Lightning Web Components
Query API
Overall explanation
A Unified Individual is a profile that combines data from multiple sources
using identity resolution rules in Data Cloud. A Unified Individual can
have multiple contact points, such as email, phone, or address, that link
to different systems and records. A consultant can use the following
features to display all cases associated with a Unified Individual on a
contact record:

* Profile API: This is a REST API that allows you to retrieve and update Unified Individual profiles and related attributes in Data Cloud. You can use the Profile API to query the cases that are related to a Unified Individual by using the contact point ID or the unified ID as a filter. You can also use the Profile API to update the Unified Individual profile with new or modified case information from other systems.

* Lightning Web Components: These are custom HTML elements that you can use to create reusable UI components for your Salesforce apps. You can use Lightning Web Components to create a custom component that displays the cases related to a Unified Individual on a contact record. You can use the Profile API to fetch the data from Data Cloud and display it in a table, list, or chart format. You can also use Lightning Web Components to enable actions, such as creating, editing, or deleting cases, from the contact record.

The other two options are not relevant for this use case. A Data Action is a type of action that executes a flow, a data action target, or a data action script when an insight is triggered. A Data Action is used for activation and personalization, not for displaying data on a contact record. The Query API allows you to access and manipulate data in Data Cloud using SQL; it is used for data exploration and analysis, not for displaying data on a contact record.

References: Profile API Developer Guide, Lightning Web Components Developer Guide, Create Unified Individual Profiles Unit
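As a rough illustration only (the endpoint shape, domain, and object name below are assumptions, not confirmed by this document), a Lightning Web Component could fetch the unified profile and its related records with a Profile API request along the lines of:

GET https://{tenant}.c360a.salesforce.com/api/v1/profile/UnifiedIndividual__dlm/{unifiedId}

and then render the returned case data in a datatable on the contact record.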
Question 62Skipped
A customer has a requirement to be able to view the last time
each segment was published within their Data Cloud org.
Which two features should the consultant recommend to best address this requirement?

Choose 2 answers
Profile Explorer
Calculated insight
Correct selection
Dashboard
Correct selection
Report
Overall explanation
A customer who wants to view the last time each segment was
published within their Data Cloud org can use the dashboard and report
features to achieve this requirement. A dashboard is a visual
representation of data that can show key metrics, trends, and
comparisons. A report is a tabular or matrix view of data that can show
details, summaries, and calculations. Both dashboard and report
features allow the user to create, customize, and share data views based
on their needs and preferences. To view the last time each segment was
published, the user can create a dashboard or a report that shows the
segment name, the publish date, and the publish status fields from the
segment object. The user can also filter, sort, group, or chart the data by
these fields to get more insights and analysis. The user can also
schedule, refresh, or export the dashboard or report data as needed.
References: Dashboards, Reports
Question 63Skipped
The leadership team at Cumulus Financial has determined that
customers who deposited more than $250,000 in the last five
years and are not using advisory services will be the central
focus for all new campaigns in the next year.
Which features support this use case?
Calculated insight and data action
Correct answer
Calculated insight and segment
Streaming insight and segment
Streaming insight and data action
Overall explanation
1. Calculated Insight: This feature can be used to create a metric, such as lifetime deposits, by calculating the total deposits over the last five years. This metric helps identify customers who deposited more than $250,000 (see the sketch after this list).
2. Segment: After identifying the right criteria through a calculated
insight, a segment can be created to filter customers who meet
the criteria (deposited more than $250,000 and not using advisory
services). This segment can then be used for targeting the new
campaigns.
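A minimal sketch of the calculated insight, with hypothetical DMO and field names (Deposit__dlm, Amount__c, DepositDate__c, UnifiedIndividualId__c) and an illustrative date filter (the exact date functions available may differ):

SELECT
  SUM(Deposit__dlm.Amount__c) AS lifetime_deposits__c,  -- measure: total deposits
  Deposit__dlm.UnifiedIndividualId__c AS customer_id__c -- dimension: the customer
FROM Deposit__dlm
WHERE Deposit__dlm.DepositDate__c >= ADD_MONTHS(CURRENT_DATE, -60) -- last five years
GROUP BY customer_id__c

A segment can then filter on lifetime_deposits__c > 250000 and exclude customers flagged as using advisory services.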
Question 64Skipped
Northern Trail Outfitters (NTO) is getting ready to start
ingesting its CRM data into Data Cloud.
While setting up the connector, which type of refresh should NTO expect when the data stream is deployed for the first time?
Incremental
Manual refresh
Partial refresh
Correct answer
Full refresh
Overall explanation
Data Stream Deployment: When setting up a data stream in Salesforce
Data Cloud, the initial deployment requires a comprehensive data load.
Types of Refreshes:

* Incremental Refresh: Only updates with new or changed data since the last refresh.
* Manual Refresh: Requires a user to manually initiate the data load.
* Partial Refresh: Only a subset of the data is refreshed.
* Full Refresh: Loads the entire dataset into the system.

First-Time Deployment: For the initial deployment of a data stream, a full refresh is necessary to ensure all data from the source system is ingested into Salesforce Data Cloud.