DP 203 ExamTopics


1. You have a table in an Azure Synapse Analytics dedicated SQL pool.

The table was created by using the following Transact-SQL statement.

You need to alter the table to meet the following requirements:

✑ Ensure that users can identify the current manager of employees.

✑ Support creating an employee reporting hierarchy for your entire company.

✑ Provide fast lookup of the managers' attributes such as name and job title.

Which column should you add to the table?

A. [ManagerEmployeeID] [smallint] NULL

B. [ManagerEmployeeKey] [smallint] NULL

C. [ManagerEmployeeKey] [int] NULL

D. [ManagerName] [varchar](200) NULL
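
Option C is the commonly accepted answer: an int column matches the EmployeeKey surrogate key it references, and NULL allows the top of the hierarchy to have no manager. A minimal T-SQL sketch of the change, assuming the table is named dbo.DimEmployee (the name is an assumption):

ALTER TABLE dbo.DimEmployee
ADD [ManagerEmployeeKey] int NULL;  -- self-referencing surrogate key; NULL for the top-level manager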

2. You have an Azure Synapse workspace named MyWorkspace that contains an Apache Spark
database named mytestdb.

You run the following command in an Azure Synapse Analytics Spark pool in MyWorkspace.

CREATE TABLE mytestdb.myParquetTable(
    EmployeeID int,
    EmployeeName string,
    EmployeeStartDate date)
USING Parquet

You then use Spark to insert a row into mytestdb.myParquetTable. The row contains the
following data.

One minute later, you execute the following query from a serverless SQL pool in MyWorkspace.

SELECT EmployeeID FROM mytestdb.dbo.myParquetTable WHERE name = 'Alice';

What will be returned by the query?

A. 24

B. an error (Most Voted)

C. a null value
3. You have a table named SalesFact in an enterprise data warehouse in Azure Synapse Analytics.
SalesFact contains sales data from the past 36 months and has the following characteristics:

✑ Is partitioned by month

✑ Contains one billion rows

✑ Has clustered columnstore indexes

At the beginning of each month, you need to remove data from SalesFact that is older than 36 months as
quickly as possible.

Which three actions should you perform in sequence in a stored procedure?
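
The usual sequence switches the expired partition into an empty work table and then drops that table, because switching is a metadata-only operation. A hedged T-SQL sketch, assuming monthly partitions on a DateKey column and a hash distribution on CustomerKey (both assumptions):

-- 1. Create an empty work table with the same distribution, index, and partition boundaries (sketch).
CREATE TABLE dbo.SalesFact_Work
WITH
(
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (DateKey RANGE RIGHT FOR VALUES (20190101, 20190201 /* ...remaining monthly boundaries... */))
)
AS
SELECT * FROM dbo.SalesFact WHERE 1 = 2;

-- 2. Switch the oldest partition (assumed to be partition 1) out of SalesFact.
ALTER TABLE dbo.SalesFact SWITCH PARTITION 1 TO dbo.SalesFact_Work PARTITION 1;

-- 3. Remove the expired rows by dropping the work table.
DROP TABLE dbo.SalesFact_Work;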

4. You have files and folders in Azure Data Lake Storage Gen2 for an Azure Synapse workspace as
shown in the following exhibit.

You create an external table named ExtTable that has LOCATION='/topfolder/'.

When you query ExtTable by using an Azure Synapse Analytics serverless SQL pool,
which files are returned?
A. File2.csv and File3.csv only
B. File1.csv and File4.csv only (Most Voted)
C. File1.csv, File2.csv, File3.csv, and File4.csv
D. File1.csv only

5. You are planning the deployment of Azure Data Lake Storage Gen2.
You have the following two reports that will access the data lake:

✑ Report1: Reads 3 columns from a file that contains 50 columns.

✑ Report2: Queries a single record based on a timestamp.

You need to recommend in which format to store the data in the data lake to support the reports. The
solution must minimize read times.

What should you recommend for each report?

6. You are designing the folder structure for an Azure Data Lake Storage Gen2 container.

Users will query data by using a variety of services including Azure Databricks and Azure Synapse
Analytics serverless SQL pools. The data will be secured by subject area. Most queries will include data
from the current year or current month.

Which folder structure should you recommend to support fast queries and simplified folder
security?

A. /{SubjectArea}/{DataSource}/{DD}/{MM}/{YYYY}/{FileData}_{YYYY}_{MM}_{DD}.csv

B. /{DD}/{MM}/{YYYY}/{SubjectArea}/{DataSource}/{FileData}_{YYYY}_{MM}_{DD}.csv

C. /{YYYY}/{MM}/{DD}/{SubjectArea}/{DataSource}/{FileData}_{YYYY}_{MM}_{DD}.csv

D. /{SubjectArea}/{DataSource}/{YYYY}/{MM}/{DD}/{FileData}_{YYYY}_{MM}_{DD}.csv

7. You need to output files from Azure Data Factory.


Which file format should you use for each type of output?

8. You use Azure Data Factory to prepare data to be queried by Azure Synapse Analytics
serverless SQL pools.

Files are initially ingested into an Azure Data Lake Storage Gen2 account as 10 small JSON files. Each file
contains the same data attributes and data from a subsidiary of your company.

You need to move the files to a different folder and transform the data to meet the following requirements:

✑ Provide the fastest possible query times.

✑ Automatically infer the schema from the underlying files.

How should you configure the Data Factory copy activity?

9. You have a data model that you plan to implement in a data warehouse in Azure Synapse
Analytics as shown in the following exhibit.
All the dimension tables will be less than 2 GB after compression, and the fact table will be approximately
6 TB. The dimension tables will be relatively static with very few data inserts and updates.

Which type of table should you use for each table?

NOTE: In this case, all dimension tables must use REPLICATED and the fact table must use HASH.

Common distribution methods for tables:

✑ Fact: Use HASH distribution with a clustered columnstore index. Performance improves when two hash-distributed tables are joined on the same distribution column.

✑ Dimension: Use REPLICATED for smaller tables. If a table is too large to store on each Compute node, use HASH.

✑ Staging: Use ROUND_ROBIN; the load with CTAS is fast.
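
A minimal T-SQL sketch of the three patterns; all table and column names below are assumptions made for illustration:

-- Large fact table: hash-distribute on a join key and use a clustered columnstore index.
CREATE TABLE dbo.FactSales
(
    ProductKey int NOT NULL,
    OrderDateKey int NOT NULL,
    SalesAmount money NOT NULL
)
WITH (DISTRIBUTION = HASH(ProductKey), CLUSTERED COLUMNSTORE INDEX);

-- Small, static dimension table: replicate a full copy to every Compute node.
CREATE TABLE dbo.DimProduct
(
    ProductKey int NOT NULL,
    ProductName nvarchar(100) NOT NULL
)
WITH (DISTRIBUTION = REPLICATE, CLUSTERED COLUMNSTORE INDEX);

-- Staging table: round-robin heap for the fastest possible load.
CREATE TABLE dbo.StageSales
(
    ProductKey int NOT NULL,
    OrderDateKey int NOT NULL,
    SalesAmount money NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);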

10. You have an Azure Data Lake Storage Gen2 container.

Data is ingested into the container, and then transformed by a data integration application.

The data is NOT modified after that. Users can read files in the container but cannot modify the files.

You need to design a data archiving solution that meets the following requirements:

✑ New data is accessed frequently and must be available as quickly as possible.

✑ Data that is older than five years is accessed infrequently but must be available within one second
when requested.

✑ Data that is older than seven years is NOT accessed. After seven years, the data must be persisted at
the lowest cost possible.

✑ Costs must be minimized while maintaining the required availability.

How should you manage the data?

✑ New data = Hot tier.

✑ Data that is older than five years = Cool tier.

✑ Data that is older than seven years = Archive tier.

11. You need to create a partitioned table in an Azure Synapse Analytics dedicated SQL pool.

How should you complete the Transact-SQL statement?
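
The exhibit is not reproduced here, but a typical completed statement has the following shape; the table name, column names, and boundary values are assumptions:

CREATE TABLE dbo.FactTransactions
(
    TransactionKey bigint NOT NULL,
    DateKey int NOT NULL,
    Amount money NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(TransactionKey),
    CLUSTERED COLUMNSTORE INDEX,
    -- RANGE RIGHT puts each boundary value into the partition to its right.
    PARTITION (DateKey RANGE RIGHT FOR VALUES (20210101, 20210201, 20210301))
);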


12. You need to design an Azure Synapse Analytics dedicated SQL pool that meets the following
requirements:

✑ Can return an employee record from a given point in time.

✑ Maintains the latest employee information.

✑ Minimizes query complexity.

How should you model the employee data?

A. as a temporal table

B. as a SQL graph table

C. as a degenerate dimension table

D. as a Type 2 slowly changing dimension (SCD) table

13. You have an enterprise-wide Azure Data Lake Storage Gen2 account. The data lake is
accessible only through an Azure virtual network named VNET1.

You are building a SQL pool in Azure Synapse that will use data from the data lake.
Your company has a sales team. All the members of the sales team are in an Azure Active Directory group
named Sales. POSIX controls are used to assign the

Sales group access to the files in the data lake.

You plan to load data to the SQL pool every hour.

You need to ensure that the SQL pool can load the sales data from the data lake.

Which three actions should you perform?

A. Add the managed identity to the Sales group.

B. Use the managed identity as the credentials for the data load process.

C. Create a shared access signature (SAS).

D. Add your Azure Active Directory (Azure AD) account to the Sales group.

E. Use the shared access signature (SAS) as the credentials for the data load process.

F. Create a managed identity.

14. You have an Azure Synapse Analytics dedicated SQL pool that contains the users shown in the
following table.

User1 executes a query on the database, and the query returns the results shown in the following
exhibit.

User1 is the only user who has access to the unmasked data.

15. You have an enterprise data warehouse in Azure Synapse Analytics.

Using PolyBase, you create an external table named [Ext].[Items] to query Parquet files stored in Azure
Data Lake Storage Gen2 without importing the data to the data warehouse.
The external table has three columns.

You discover that the Parquet files have a fourth column named ItemID.

Which command should you run to add the ItemID column to the external table?

The correct option drops and re-creates the external table, because columns cannot be added to an existing external table.
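
A hedged sketch of the drop-and-recreate pattern; the existing column list, data source, and file format names are assumptions:

DROP EXTERNAL TABLE [Ext].[Items];

CREATE EXTERNAL TABLE [Ext].[Items]
(
    [ItemName] nvarchar(50),
    [ItemType] nvarchar(20),
    [ItemDescription] nvarchar(250),
    [ItemID] int                      -- the newly discovered column
)
WITH
(
    LOCATION = '/Items/',
    DATA_SOURCE = AzureDataLakeStore,
    FILE_FORMAT = ParquetFileFormat
);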

16. You have two Azure Storage accounts named Storage1 and Storage2. Each account holds one
container and has the hierarchical namespace enabled. The system has files that contain data
stored in the Apache Parquet format.

You need to copy folders and files from Storage1 to Storage2 by using a Data Factory copy activity. The
solution must meet the following requirements:

✑ No transformations must be performed.

✑ The original folder structure must be retained.

✑ Minimize time required to perform the copy activity.

How should you configure the copy activity?

17. You have an Azure Data Lake Storage Gen2 container that contains 100 TB of data.

You need to ensure that the data in the container is available for read workloads in a secondary region if
an outage occurs in the primary region. The solution must minimize costs.
Which type of data redundancy should you use?

A. geo-redundant storage (GRS)

B. read-access geo-redundant storage (RA-GRS)

C. zone-redundant storage (ZRS)

D. locally-redundant storage (LRS)

18. You plan to implement an Azure Data Lake Gen 2 storage account.

You need to ensure that the data lake will remain available if a data center fails in the primary Azure
region. The solution must minimize costs.

Which type of replication should you use for the storage account?

A. geo-redundant storage (GRS)

B. geo-zone-redundant storage (GZRS)

C. locally-redundant storage (LRS)

D. zone-redundant storage (ZRS)

19. You have a SQL pool in Azure Synapse.

You plan to load data from Azure Blob storage to a staging table. Approximately 1 million rows of data will
be loaded daily. The table will be truncated before each daily load.

You need to create the staging table. The solution must minimize how long it takes to load the data to the
staging table.

How should you configure the table?

20. You are designing a fact table named FactPurchase in an Azure Synapse Analytics dedicated
SQL pool. The table contains purchases from suppliers for a retail store. FactPurchase will
contain the following columns.
FactPurchase will have 1 million rows of data added daily and will contain three years of data.

Transact-SQL queries similar to the following query will be executed daily.

SELECT SupplierKey, StockItemKey, IsOrderFinalized, COUNT(*)
FROM FactPurchase
WHERE DateKey >= 20210101
  AND DateKey <= 20210131
GROUP BY SupplierKey, StockItemKey, IsOrderFinalized

Which table distribution will minimize query times?

A. replicated

B. hash-distributed on PurchaseKey

C. round-robin

D. hash-distributed on IsOrderFinalized

21. From a website analytics system, you receive data extracts about user interactions such as
downloads, link clicks, form submissions, and video plays.

The data contains the following columns.


You need to design a star schema to support analytical queries of the data. The star schema will contain
four tables including a date dimension.

To which table should you add each column?.

22. You have an Azure Storage account that contains 100 GB of files. The files contain rows of text
and numerical values. 75% of the rows contain description data that has an average length of 1.1
MB.

You plan to copy the data from the storage account to an enterprise data warehouse in Azure Synapse
Analytics.

You need to prepare the files to ensure that the data copies quickly.

Solution: You convert the files to compressed delimited text files.

Does this meet the goal?

A. Yes

B. No

23. You have an Azure Storage account that contains 100 GB of files. The files contain rows of text
and numerical values. 75% of the rows contain description data that has an average length of 1.1
MB.
You plan to copy the data from the storage account to an enterprise data warehouse in Azure Synapse
Analytics.

You need to prepare the files to ensure that the data copies quickly.

Solution: You copy the files to a table that has a columnstore index.

Does this meet the goal?

A. Yes

B. No

24. You have an Azure Storage account that contains 100 GB of files. The files contain rows of text
and numerical values. 75% of the rows contain description data that has an average length of 1.1
MB.

You plan to copy the data from the storage account to an enterprise data warehouse in Azure Synapse
Analytics.

You need to prepare the files to ensure that the data copies quickly.

Solution: You modify the files to ensure that each row is more than 1 MB.

Does this meet the goal?

A. Yes

B. No

25. You build a data warehouse in an Azure Synapse Analytics dedicated SQL pool.

Analysts write a complex SELECT query that contains multiple JOIN and CASE statements to transform
data for use in inventory reports. The inventory reports will use the data and additional WHERE
parameters depending on the report. The reports will be produced once daily.

You need to implement a solution to make the dataset available for the reports. The solution must
minimize query times.

What should you implement?

A. an ordered clustered columnstore index

B. a materialized view

C. result set caching

D. a replicated table
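
If a materialized view is chosen, it pre-computes the joins and aggregations once so the daily reports only filter the stored result. A hedged sketch, assuming illustrative table, column, and view names; Synapse materialized views require an aggregate in the SELECT list and a HASH or ROUND_ROBIN distribution:

CREATE MATERIALIZED VIEW dbo.mvInventoryReportBase
WITH (DISTRIBUTION = HASH(ProductKey))
AS
SELECT
    p.ProductKey,
    w.WarehouseKey,
    SUM(i.QuantityOnHand) AS TotalOnHand,
    COUNT_BIG(*) AS RowsCounted
FROM dbo.FactInventory AS i
INNER JOIN dbo.DimProduct AS p ON i.ProductKey = p.ProductKey
INNER JOIN dbo.DimWarehouse AS w ON i.WarehouseKey = w.WarehouseKey
GROUP BY p.ProductKey, w.WarehouseKey;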

26. You have an Azure Synapse Analytics workspace named WS1 that contains an Apache Spark
pool named Pool1.

You plan to create a database named DB1 in Pool1.


You need to ensure that when tables are created in DB1, the tables are available automatically as external
tables to the built-in serverless SQL pool.

Which format should you use for the tables in DB1?

A. CSV

B. ORC

C. JSON

D. Parquet

27. You are planning a solution to aggregate streaming data that originates in Apache Kafka and is
output to Azure Data Lake Storage Gen2. The developers who will implement the stream
processing solution use Java.

Which service should you recommend using to process the streaming data?

A. Azure Event Hubs

B. Azure Data Factory

C. Azure Stream Analytics

D. Azure Databricks

28. You plan to implement an Azure Data Lake Storage Gen2 container that will contain CSV files.
The size of the files will vary based on the number of events that occur per hour.

File sizes range from 4 KB to 5 GB.

You need to ensure that the files stored in the container are optimized for batch processing.

What should you do?

A. Convert the files to JSON

B. Convert the files to Avro

C. Compress the files

D. Merge the files

29. You store files in an Azure Data Lake Storage Gen2 container. The container has the storage
policy shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on
the information presented in the graphic.

30. You are designing a financial transactions table in an Azure Synapse Analytics dedicated SQL
pool. The table will have a clustered columnstore index and will include the following columns:

✑ TransactionType: 40 million rows per transaction type

✑ CustomerSegment: 4 million rows per customer segment

✑ TransactionMonth: 65 million rows per month

✑ AccountType: 500 million rows per account type

You have the following query requirements:

✑ Analysts will most commonly analyze transactions for a given month.

✑ Transactions analysis will typically summarize transactions by transaction type, customer segment,
and/or account type

You need to recommend a partition strategy for the table to minimize query times.

On which column should you recommend partitioning the table?

A. CustomerSegment

B. AccountType

C. TransactionType

D. TransactionMonth
31. You have an Azure Data Lake Storage Gen2 account named account1 that stores logs as shown
in the following table.

You do not expect that the logs will be accessed during the retention periods.

You need to recommend a solution for account1 that meets the following requirements:

✑ Automatically deletes the logs at the end of each retention period

✑ Minimizes storage costs

32. You plan to ingest streaming social media data by using Azure Stream Analytics. The data will be
stored in files in Azure Data Lake Storage, and then consumed by using Azure Databricks and
PolyBase in Azure Synapse Analytics.

You need to recommend a Stream Analytics data output format to ensure that the queries from
Databricks and PolyBase against the files encounter the fewest possible errors. The solution must
ensure that the files can be queried quickly and that the data type information is retained.

What should you recommend?

A. JSON

B. Parquet

C. CSV

D. Avro
33. You have an Azure Synapse Analytics dedicated SQL pool named Pool1. Pool1 contains a
partitioned fact table named dbo.Sales and a staging table named stg.Sales that has the
matching table and partition definitions.

You need to overwrite the content of the first partition in dbo.Sales with the content of the same
partition in stg.Sales. The solution must minimize load times.

What should you do?

A. Insert the data from stg.Sales into dbo.Sales.

B. Switch the first partition from dbo.Sales to stg.Sales.

C. Switch the first partition from stg.Sales to dbo.Sales.

D. Update dbo.Sales from stg.Sales.

34. You are designing a slowly changing dimension (SCD) for supplier data in an Azure Synapse
Analytics dedicated SQL pool.

You plan to keep a record of changes to the available fields.

The supplier data contains the following columns.

Which three additional columns should you add to the data to create a Type 2 SCD? Each correct
answer presents part of the solution.

A. surrogate primary key

B. effective start date

C. business key

D. last modified date

E. effective end date

F. foreign key
35. You have a Microsoft SQL Server database that uses a third normal form schema.

You plan to migrate the data in the database to a star schema in an Azure Synapse Analytics dedicated
SQL pool.

You need to design the dimension tables. The solution must optimize read operations.

What should you include in the solution? To answer, select the appropriate options in the answer area.

36. You are designing a partition strategy for a fact table in an Azure Synapse Analytics dedicated
SQL pool. The table has the following specifications:

✑ Contain sales data for 20,000 products.

✑ Use hash distribution on a column named ProductID.

✑ Contain 2.4 billion records for the years 2019 and 2020.

Which number of partition ranges provides optimal compression and performance for the
clustered columnstore index?

A. 40

B. 240

C. 400

D. 2,400
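
The commonly cited reasoning behind option A: a dedicated SQL pool spreads every table across 60 distributions, and a clustered columnstore index compresses best when each rowgroup holds about 1 million rows. With 40 partitions, 2.4 billion rows / (40 partitions x 60 distributions) = 1 million rows per partition per distribution, which matches the target rowgroup size; more partitions would leave the rowgroups undersized.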
37. You are creating dimensions for a data warehouse in an Azure Synapse Analytics dedicated SQL
pool.

You create a table by using the Transact-SQL statement shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the graphic.
38. You are designing a fact table named FactPurchase in an Azure Synapse Analytics dedicated
SQL pool. The table contains purchases from suppliers for a retail store. FactPurchase will
contain the following columns.

FactPurchase will have 1 million rows of data added daily and will contain three years of data.
Transact-SQL queries similar to the following query will be executed daily.

SELECT SupplierKey, StockItemKey, COUNT(*)
FROM FactPurchase
WHERE DateKey >= 20210101
  AND DateKey <= 20210131
GROUP BY SupplierKey, StockItemKey

Which table distribution will minimize query times?

A. replicated
B. hash-distributed on PurchaseKey
C. round-robin
D. hash-distributed on DateKey

39. You are implementing a batch dataset in the Parquet format.

Data files will be produced by using Azure Data Factory and stored in Azure Data Lake Storage Gen2. The
files will be consumed by an Azure Synapse Analytics serverless SQL pool.

You need to minimize storage costs for the solution.

What should you do?

A. Use Snappy compression for files.

B. Use OPENROWSET to query the Parquet files.

C. Create an external table that contains a subset of columns from the Parquet files.

D. Store all data as string in the Parquet files.


40. You need to build a solution to ensure that users can query specific files in an Azure Data Lake
Storage Gen2 account from an Azure Synapse Analytics serverless SQL pool.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the
list of actions to the answer area and arrange them in the correct order.

41. You are designing a data mart for the human resources (HR) department at your company. The
data mart will contain employee information and employee transactions.

From a source system, you have a flat extract that has the following fields:

✑ EmployeeID

✑ FirstName

✑ LastName

✑ Recipient

✑ GrossAmount

✑ TransactionID

✑ GovernmentID

✑ NetAmountPaid

✑ TransactionDate

You need to design a star schema data model in an Azure Synapse Analytics dedicated SQL pool for the
data mart.

Which two tables should you create? Each correct answer presents part of the solution.

A. a dimension table for Transaction

B. a dimension table for EmployeeTransaction

C. a dimension table for Employee

D. a fact table for Employee

E. a fact table for Transaction


42. You are designing a dimension table for a data warehouse. The table will track the value of the
dimension attributes over time and preserve the history of the data by adding new rows as the
data changes.

Which type of slowly changing dimension (SCD) should you use?

A. Type 0

B. Type 1

C. Type 2

D. Type 3

43. You have data stored in thousands of CSV files in Azure Data Lake Storage Gen2. Each file has
a header row followed by a properly formatted carriage return (\r) and line feed (\n).

You are implementing a pattern that batch loads the files daily into an enterprise data warehouse in Azure
Synapse Analytics by using PolyBase.

You need to skip the header row when you import the files into the data warehouse. Before building the
loading pattern, you need to prepare the required database objects in Azure Synapse Analytics.

Which three actions should you perform in sequence?
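
The preparation typically consists of an external data source, an external file format that skips the header row, and an external table. A hedged T-SQL sketch; every object name, the storage URL, and the column list are assumptions:

-- External file format: FIRST_ROW = 2 makes PolyBase skip the header row of each CSV file.
CREATE EXTERNAL FILE FORMAT CsvSkipHeaderFormat
WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', STRING_DELIMITER = '"', FIRST_ROW = 2)
);

-- External data source pointing at the Data Lake container (credential omitted here).
CREATE EXTERNAL DATA SOURCE DailyCsvSource
WITH (LOCATION = 'abfss://files@mydatalake.dfs.core.windows.net', TYPE = HADOOP);

-- External table used as the source for the daily PolyBase load.
CREATE EXTERNAL TABLE stage.DailyLoad
(
    Col1 int,
    Col2 nvarchar(200)
)
WITH (LOCATION = '/daily/', DATA_SOURCE = DailyCsvSource, FILE_FORMAT = CsvSkipHeaderFormat);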


44. You are building an Azure Synapse Analytics dedicated SQL pool that will contain a fact table for
transactions from the first half of the year 2020.

You need to ensure that the table meets the following requirements:

✑ Minimizes the processing time to delete data that is older than 10 years

✑ Minimizes the I/O for queries that use year-to-date values

How should you complete the Transact-SQL statement? To answer, select the appropriate options in the
answer area.

45. You are performing exploratory analysis of the bus fare data in an Azure Data Lake Storage
Gen2 account by using an Azure Synapse Analytics serverless SQL pool.

You execute the Transact-SQL query shown in the following exhibit.

What do the query results include?


A. Only CSV files in the tripdata_2020 subfolder.
B. All files that have file names that begin with "tripdata_2020".
C. All CSV files that have file names that contain "tripdata_2020".
D. Only CSV files that have file names that begin with "tripdata_2020".
46. You use PySpark in Azure Databricks to parse the following JSON input.

You need to output the data in the following tabular format.

How should you complete the PySpark code?

47. You are designing an application that will store petabytes of medical imaging data.

When the data is first created, the data will be accessed frequently during the first week. After one month,
the data must be accessible within 30 seconds, but files will be accessed infrequently. After one year, the
data will be accessed infrequently but must be accessible within five minutes.

You need to select a storage strategy for the data. The solution must minimize costs.

Which storage tier should you use for each time frame?
48. You have an Azure Synapse Analytics Apache Spark pool named Pool1.

You plan to load JSON files from an Azure Data Lake Storage Gen2 container into the tables in Pool1. The
structure and data types vary by file.

You need to load the files into the tables. The solution must maintain the source data types.

What should you do?

A. Use a Conditional Split transformation in an Azure Synapse data flow.

B. Use a Get Metadata activity in Azure Data Factory.

C. Load the data by using the OPENROWSET Transact-SQL command in an Azure Synapse Analytics
serverless SQL pool.

D. Load the data by using PySpark.

49. You have an Azure Databricks workspace named workspace1 in the Standard pricing tier.
Workspace1 contains an all-purpose cluster named cluster1.

You need to reduce the time it takes for cluster1 to start and scale up. The solution must minimize
costs.

What should you do first?

A. Configure a global init script for workspace1.

B. Create a cluster policy in workspace1.

C. Upgrade workspace1 to the Premium pricing tier.

D. Create a pool in workspace1.


50. You are building an Azure Stream Analytics job that queries reference data from a product
catalog file. The file is updated daily.

The reference data input details for the file are shown in the Input exhibit.

You need to configure the Stream Analytics job to pick up the new reference data.

What should you configure?


51. You have the following Azure Stream Analytics query.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

52. You are building a database in an Azure Synapse Analytics serverless SQL pool.

You have data stored in Parquet files in an Azure Data Lake Storage Gen2 container.

Records are structured as shown in the following sample.

{"id": 123,"address_housenumber": "19c","address_line": "Memory Lane","applicant1_name": "Jane",

"applicant2_name": "Dev"

The records contain two applicants at most.

You need to build a table that includes only the address fields.

How should you complete the Transact-SQL statement?
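
One hedged reading of the required statement is a CETAS (CREATE EXTERNAL TABLE AS SELECT) over OPENROWSET that keeps only the address columns; the data source, locations, and object names are assumptions:

CREATE EXTERNAL TABLE dbo.ApplicationAddresses
WITH
(
    LOCATION = 'addresses/',
    DATA_SOURCE = MyDataLakeSource,
    FILE_FORMAT = ParquetFileFormat
)
AS
SELECT id, address_housenumber, address_line
FROM OPENROWSET(
        BULK 'applications/*.parquet',
        DATA_SOURCE = 'MyDataLakeSource',
        FORMAT = 'PARQUET'
     ) AS source_rows;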


53. You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and an Azure Data
Lake Storage Gen2 account named Account1.

You plan to access the files in Account1 by using an external table.

You need to create a data source in Pool1 that you can reference when you create the external table.

How should you complete the Transact-SQL statement?
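
A hedged sketch of the data source definition; the container, account, and object names are assumptions, and the TYPE and CREDENTIAL clauses depend on how Pool1 authenticates to Account1:

CREATE EXTERNAL DATA SOURCE Account1DataSource
WITH
(
    LOCATION = 'abfss://files@account1.dfs.core.windows.net',
    TYPE = HADOOP          -- used by PolyBase external tables in a dedicated SQL pool
    -- , CREDENTIAL = <database scoped credential>  -- if not using the workspace managed identity
);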

54. You have an Azure subscription that contains an Azure Blob Storage account named storage1
and an Azure Synapse Analytics dedicated SQL pool named Pool1.

You need to store data in storage1. The data will be read by Pool1. The solution must meet the following
requirements:

✑ Enable Pool1 to skip columns and rows that are unnecessary in a query.

✑ Automatically create column statistics.

✑ Minimize the size of files.

Which type of file should you use?

A. JSON

B. Parquet

C. Avro

D. CSV
55. You plan to create a table in an Azure Synapse Analytics dedicated SQL pool.

Data in the table will be retained for five years. Once a year, data that is older than five years will be
deleted.

You need to ensure that the data is distributed evenly across partitions. The solution must minimize the
amount of time required to delete old data.

How should you complete the Transact-SQL statement?

56. You have an Azure Data Lake Storage Gen2 service.

You need to design a data archiving solution that meets the following requirements:

✑ Data that is older than five years is accessed infrequently but must be available within one second
when requested.

✑ Data that is older than seven years is NOT accessed.

✑ Costs must be minimized while maintaining the required availability.

How should you manage the data?


57. You plan to create an Azure Databricks workspace that has a tiered structure. The workspace will
contain the following three workloads:

✑ A workload for data engineers who will use Python and SQL.

✑ A workload for jobs that will run notebooks that use Python, Scala, and SQL.

✑ A workload that data scientists will use to perform ad hoc analysis in Scala and R.

The enterprise architecture team at your company identifies the following standards for Databricks
environments:

✑ The data engineers must share a cluster.

✑ The job cluster will be managed by using a request process whereby data scientists and data
engineers provide packaged notebooks for deployment to the cluster.

✑ All the data scientists must be assigned their own cluster that terminates automatically after 120
minutes of inactivity. Currently, there are three data scientists.

You need to create the Databricks clusters for the workloads.

Solution: You create a Standard cluster for each data scientist, a High Concurrency cluster for the
data engineers, and a Standard cluster for the jobs.

Does this meet the goal?

A. Yes

B. No

58. You plan to create an Azure Databricks workspace that has a tiered structure. The workspace will
contain the following three workloads:

✑ A workload for data engineers who will use Python and SQL.

✑ A workload for jobs that will run notebooks that use Python, Scala, and SQL.

✑ A workload that data scientists will use to perform ad hoc analysis in Scala and R.

The enterprise architecture team at your company identifies the following standards for Databricks
environments:

✑ The data engineers must share a cluster.

✑ The job cluster will be managed by using a request process whereby data scientists and data engineers
provide packaged notebooks for deployment to the cluster.

✑ All the data scientists must be assigned their own cluster that terminates automatically after 120
minutes of inactivity. Currently, there are three data scientists.

You need to create the Databricks clusters for the workloads.

Solution: You create a Standard cluster for each data scientist, a High Concurrency cluster for the
data engineers, and a High Concurrency cluster for the jobs.

Does this meet the goal?

A. Yes

B. No
59. You plan to create a real-time monitoring app that alerts users when a device travels more than
200 meters away from a designated location.

You need to design an Azure Stream Analytics job to process the data for the planned app. The solution
must minimize the amount of code developed and the number of technologies used.

What should you include in the Stream Analytics job?

60. A company has a real-time data analysis solution that is hosted on Microsoft Azure. The solution
uses Azure Event Hub to ingest data and an Azure Stream

Analytics cloud job to analyze the data. The cloud job is configured to use 120 Streaming Units (SU).

You need to optimize performance for the Azure Stream Analytics job.

Which two actions should you perform? Each correct answer presents part of the solution.

A. Implement event ordering.

B. Implement Azure Stream Analytics user-defined functions (UDF).

C. Implement query parallelization by partitioning the data output.

D. Scale the SU count for the job up.

E. Scale the SU count for the job down.

F. Implement query parallelization by partitioning the data input.


61. You need to trigger an Azure Data Factory pipeline when a file arrives in an Azure Data Lake
Storage Gen2 container.

Which resource provider should you enable?

A. Microsoft.Sql

B. Microsoft.Automation

C. Microsoft.EventGrid

D. Microsoft.EventHub

62. You plan to perform batch processing in Azure Databricks once daily.

Which type of Databricks cluster should you use?

A. High Concurrency

B. automated

C. interactive

63. You are processing streaming data from vehicles that pass through a toll booth.

You need to use Azure Stream Analytics to return the license plate, vehicle make, and hour the last
vehicle passed during each 10-minute window.

How should you complete the query?
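
A hedged sketch based on the documented "last event in a time window" pattern for Stream Analytics; the input name and column names are assumptions:

WITH LastInWindow AS
(
    SELECT MAX(Time) AS LastEventTime
    FROM Input TIMESTAMP BY Time
    GROUP BY TumblingWindow(minute, 10)
)
SELECT Input.LicensePlate, Input.Make, Input.Time
FROM Input TIMESTAMP BY Time
INNER JOIN LastInWindow
    ON DATEDIFF(minute, Input, LastInWindow) BETWEEN 0 AND 10
   AND Input.Time = LastInWindow.LastEventTime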


64. You have an Azure Data Factory instance that contains two pipelines named Pipeline1 and
Pipeline2.

Pipeline1 has the activities shown in the following exhibit

Pipeline2 has the activities shown in the following exhibit.

You execute Pipeline2, and Stored procedure1 in Pipeline1 fails.

What is the status of the pipeline runs?

A. Pipeline1 and Pipeline2 succeeded.

B. Pipeline1 and Pipeline2 failed.

C. Pipeline1 succeeded and Pipeline2 failed.

D. Pipeline1 failed and Pipeline2 succeeded.

65. A company plans to use Platform-as-a-Service (PaaS) to create the new data pipeline process.
The process must meet the following requirements:

Ingest: Access multiple data sources. Provide the ability to orchestrate workflow. Provide the capability to run SQL Server Integration Services packages.

Store: Optimize storage for big data workloads. Provide encryption of data at rest. Operate with no size limits.

Prepare and Train: Provide a fully managed and interactive workspace for exploration and visualization. Provide the ability to program in R, SQL, Python, Scala, and Java. Provide seamless user authentication with Azure Active Directory.

Model and Serve: Implement native columnar storage. Provide support for the SQL language. Provide support for structured streaming.

You need to build the data integration pipeline.


66. You have the following table named Employees.

You need to calculate the employee_type value based on the hire_date value.

How should you complete the Transact-SQL statement?
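
The completed statement is typically a CASE expression over hire_date. A minimal sketch, assuming the classification simply splits on a fixed hire date (the column names and cutoff are assumptions):

SELECT
    employee_id,
    hire_date,
    CASE
        WHEN hire_date >= '2019-01-01' THEN 'New'
        ELSE 'Standard'
    END AS employee_type
FROM dbo.Employees;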

68. You have an Azure Synapse Analytics workspace named WS1.

You have an Azure Data Lake Storage Gen2 container that contains JSON-formatted files in the following
format.

You need to use the serverless SQL pool in WS1 to read the files.

How should you complete the Transact-SQL statement?
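
A hedged sketch of the documented serverless SQL pattern for reading JSON files: read each document as a single text column, then extract properties with JSON_VALUE. The storage path and property names are assumptions:

SELECT
    JSON_VALUE(doc, '$.id')   AS id,
    JSON_VALUE(doc, '$.name') AS name
FROM OPENROWSET(
        BULK 'https://mydatalake.dfs.core.windows.net/files/*.json',
        FORMAT = 'CSV',
        FIELDTERMINATOR = '0x0b',
        FIELDQUOTE = '0x0b'
     )
     WITH (doc nvarchar(max)) AS rows;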


69. You have an Apache Spark DataFrame named temperatures. A sample of the data is shown in
the following table.

You need to produce the following table by using a Spark SQL query.

How should you complete the query?
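
If the goal is to turn the month values into columns, the Spark SQL PIVOT clause is the usual completion. A hedged sketch; the column names, aggregate function, and month values are assumptions:

SELECT *
FROM temperatures
PIVOT (
    AVG(temperature) FOR month IN ('Jan', 'Feb', 'Mar')
);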

70. You have an Azure Data Factory that contains 10 pipelines.

You need to label each pipeline with its main purpose of either ingest, transform, or load. The labels must
be available for grouping and filtering when using the monitoring experience in Data Factory.

What should you add to each pipeline?

A. a resource tag

B. a correlation ID

C. a run group ID

D. an annotation
71. The following code segment is used to create an Azure Databricks cluster.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

72. You are designing a statistical analysis solution that will use custom proprietary Python functions
on near real-time data from Azure Event Hubs.

You need to recommend which Azure service to use to perform the statistical analysis. The solution must
minimize latency.

What should you recommend?

A. Azure Synapse Analytics

B. Azure Databricks

C. Azure Stream Analytics

D. Azure SQL Database


73. You have an enterprise data warehouse in Azure Synapse Analytics that contains a table named
FactOnlineSales. The table contains data from the start of 2009 to the end of 2012.

You need to improve the performance of queries against FactOnlineSales by using table partitions. The
solution must meet the following requirements:

✑ Create four partitions based on the order date.

✑ Ensure that each partition contains all the orders placed during a given calendar year.

How should you complete the T-SQL command?

74. You need to implement a Type 3 slowly changing dimension (SCD) for product category data in
an Azure Synapse Analytics dedicated SQL pool.

You have a table that was created by using the following Transact-SQL statement

Which two columns should you add to the table? Each correct answer presents part of the
solution.

A. [EffectiveStartDate] [datetime] NOT NULL,

B. [CurrentProductCategory] [nvarchar] (100) NOT NULL,

C. [EffectiveEndDate] [datetime] NULL,

D. [ProductCategory] [nvarchar] (100) NOT NULL,

E. [OriginalProductCategory] [nvarchar] (100) NOT NULL
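
Options B and E are the commonly cited answer: a Type 3 SCD keeps the current value and the original value side by side instead of adding history rows. A minimal sketch; the table name is an assumption:

-- Assumes the table is still empty; otherwise the new NOT NULL columns would need default values.
ALTER TABLE dbo.DimProductCategory
ADD [CurrentProductCategory] nvarchar(100) NOT NULL,
    [OriginalProductCategory] nvarchar(100) NOT NULL;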


75. You are designing an Azure Stream Analytics solution that will analyze Twitter data.

You need to count the tweets in each 10-second window. The solution must ensure that each tweet is
counted only once.

Solution: You use a hopping window that uses a hop size of 10 seconds and a window size of 10
seconds.

Does this meet the goal?

A. Yes

B. No

76. You are designing an Azure Stream Analytics solution that will analyze Twitter data.

You need to count the tweets in each 10-second window. The solution must ensure that each tweet is
counted only once.

Solution: You use a hopping window that uses a hop size of 5 seconds and a window size of 10
seconds.

Does this meet the goal?

A. Yes

B. No

77. You are building an Azure Stream Analytics job to identify how much time a user spends
interacting with a feature on a webpage.

The job receives events based on user actions on the webpage. Each row of data represents an event.
Each event has a type of either 'start' or 'end'.

You need to calculate the duration between start and end events.

How should you complete the query?
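
A hedged sketch of the documented "time between events" pattern: for each 'end' event, LAST ... WHEN finds the matching 'start' event for the same user and feature within the last hour. Input, column, and partition names are assumptions:

SELECT
    [user],
    feature,
    DATEDIFF(second,
             LAST(Time) OVER (PARTITION BY [user], feature
                              LIMIT DURATION(hour, 1)
                              WHEN Event = 'start'),
             Time) AS duration
FROM input TIMESTAMP BY Time
WHERE Event = 'end'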


78. You are creating an Azure Data Factory data flow that will ingest data from a CSV file, cast
columns to specified types of data, and insert the data into a table in an Azure Synapse Analytics
dedicated SQL pool. The CSV file contains three columns named username, comment, and date.

The data flow already contains the following:

✑ A source transformation.

✑ A Derived Column transformation to set the appropriate types of data.

✑ A sink transformation to land the data in the pool.

You need to ensure that the data flow meets the following requirements:

✑ All valid rows must be written to the destination table.

✑ Truncation errors in the comment column must be avoided proactively.

✑ Any rows containing comment values that will cause truncation errors upon insert must be written to a file in blob storage.

Which two actions should you perform?

A. To the data flow, add a sink transformation to write the rows to a file in blob storage.

B. To the data flow, add a Conditional Split transformation to separate the rows that will cause
truncation errors.

C. To the data flow, add a filter transformation to filter out rows that will cause truncation errors.

D. Add a select transformation to select only the rows that will cause truncation errors.

79. You need to create an Azure Data Factory pipeline to process data for the following three
departments at your company: Ecommerce, retail, and wholesale. The solution must ensure that
data can also be processed for the entire company.

How should you complete the Data Factory data flow script?
80. You have an Azure Data Lake Storage Gen2 account that contains a JSON file for customers.
The file contains two attributes named FirstName and LastName.

You need to copy the data from the JSON file to an Azure Synapse Analytics table by using Azure
Databricks. A new column must be created that concatenates the FirstName and LastName values.

You create the following components:

✑ A destination table in Azure Synapse

✑ An Azure Blob storage container

✑ A service principal

Which five actions should you perform in sequence next in a Databricks notebook?

81. You build an Azure Data Factory pipeline to move data from an Azure Data Lake Storage Gen2
container to a database in an Azure Synapse Analytics dedicated SQL pool.

Data in the container is stored in the following folder structure.

/in/{YYYY}/{MM}/{DD}/{HH}/{mm}

The earliest folder is /in/2021/01/01/00/00. The latest folder is /in/2021/01/15/01/45.

You need to configure a pipeline trigger to meet the following requirements:

✑ Existing data must be loaded.

✑ Data must be loaded every 30 minutes.

✑ Late-arriving data of up to two minutes must be included in the load for the time at which the data
should have arrived.

How should you configure the pipeline trigger?


82. You are designing a real-time dashboard solution that will visualize streaming data from remote
sensors that connect to the internet. The streaming data must be aggregated to show the
average value of each 10-second interval. The data will be discarded after being displayed in the
dashboard.

The solution will use Azure Stream Analytics and must meet the following requirements:

✑ Minimize latency from an Azure Event hub to the dashboard.

✑ Minimize the required storage.

✑ Minimize development effort.

What should you include in the solution?

83. You have an Azure Stream Analytics job that is a Stream Analytics project solution in Microsoft
Visual Studio. The job accepts data generated by IoT devices in the JSON format.

You need to modify the job to accept data generated by the IoT devices in the Protobuf format.

Which three actions should you perform from Visual Studio in sequence?
84. You have an Azure Storage account and a data warehouse in Azure Synapse Analytics in the
UK South region.

You need to copy blob data from the storage account to the data warehouse by using Azure Data Factory.
The solution must meet the following requirements:

✑ Ensure that the data remains in the UK South region at all times.

✑ Minimize administrative effort.

Which type of integration runtime should you use?

A. Azure integration runtime

B. Azure-SSIS integration runtime

C. Self-hosted integration runtime

85. You have an Azure SQL database named Database1 and two Azure event hubs named HubA
and HubB. The data consumed from each source is shown in the following table.

You need to implement Azure Stream Analytics to calculate the average fare per mile by driver.
How should you configure the Stream Analytics input for each source?
86. You have an Azure Stream Analytics job that receives clickstream data from an Azure event hub.

You need to define a query in the Stream Analytics job. The query must meet the following requirements:

✑ Count the number of clicks within each 10-second window based on the country of a visitor.

✑ Ensure that each click is NOT counted more than once.

How should you define the Query?

A. SELECT Country, Avg(*) AS Average FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, SlidingWindow(second, 10)

B. SELECT Country, Count(*) AS Count FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, TumblingWindow(second, 10)

C. SELECT Country, Avg(*) AS Average FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, HoppingWindow(second, 10, 2)

D. SELECT Country, Count(*) AS Count FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, SessionWindow(second, 5, 10)

87. You are building an Azure Stream Analytics query that will receive input data from Azure IoT Hub and
write the results to Azure Blob storage.

You need to calculate the difference in the number of readings per sensor per hour.

How should you complete the query?
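
One hedged reading of the requirement uses the Stream Analytics LAG function to compare each reading with the previous reading from the same sensor within the past hour; the input, output, and column names are assumptions:

SELECT
    sensorId,
    reading - LAG(reading) OVER (PARTITION BY sensorId LIMIT DURATION(hour, 1)) AS readingDelta
INTO BlobOutput
FROM IoTHubInput TIMESTAMP BY EventTime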

88. You need to schedule an Azure Data Factory pipeline to execute when a new file arrives in an
Azure Data Lake Storage Gen2 container.

Which type of trigger should you use?

A. on-demand

B. tumbling window

C. schedule

D. event
89. You have two Azure Data Factory instances named ADFdev and ADFprod. ADFdev connects to
an Azure DevOps Git repository.

You publish changes from the main branch of the Git repository to ADFdev.

You need to deploy the artifacts from ADFdev to ADFprod.

What should you do first?

A. From ADFdev, modify the Git configuration.

B. From ADFdev, create a linked service.

C. From Azure DevOps, create a release pipeline.

D. From Azure DevOps, update the main branch.

90. You are developing a solution that will stream to Azure Stream Analytics. The solution will have
both streaming data and reference data.

Which input type should you use for the reference data?

A. Azure Cosmos DB

B. Azure Blob storage

C. Azure IoT Hub

D. Azure Event Hubs

91. You are designing an Azure Stream Analytics job to process incoming events from sensors in
retail environments.

You need to process the events to produce a running average of shopper counts during the previous 15
minutes, calculated at five-minute intervals.

Which type of window should you use?

A. snapshot

B. tumbling

C. hopping

D. sliding
92. You are designing a monitoring solution for a fleet of 500 vehicles. Each vehicle has a GPS
tracking device that sends data to an Azure event hub once per minute.

You have a CSV file in an Azure Data Lake Storage Gen2 container. The file maintains the expected
geographical area in which each vehicle should be.

You need to ensure that when a GPS position is outside the expected area, a message is added to
another event hub for processing within 30 seconds. The solution must minimize cost.

What should you include in the solution?

93. You are designing an Azure Databricks table. The table will ingest an average of 20 million
streaming events per day.

You need to persist the events in the table for use in incremental load pipeline jobs in Azure Databricks.
The solution must minimize storage costs and incremental load times.

What should you include in the solution?

A. Partition by DateTime fields.

B. Sink to Azure Queue storage.

C. Include a watermark column.

D. Use a JSON format for physical data storage.


94. You have a self-hosted integration runtime in Azure Data Factory.

The integration runtime has the following node details:


✑ Name: X-M
✑ Status: Running
✑ Version: 4.4.7292.1
✑ Available Memory: 7697MB
✑ CPU Utilization: 6%
✑ Network (In/Out): 1.21KBps/0.83KBps
✑ Concurrent Jobs (Running/Limit): 2/14
✑ Role: Dispatcher/Worker
✑ Credential Status: In Sync

95. You have an Azure Databricks workspace named workspace1 in the Standard pricing tier.

You need to configure workspace1 to support autoscaling all-purpose clusters. The solution must meet
the following requirements:

✑ Automatically scale down workers when the cluster is underutilized for three minutes.

✑ Minimize the time it takes to scale to the maximum number of workers.

✑ Minimize costs.

What should you do first?

A. Enable container services for workspace1.

B. Upgrade workspace1 to the Premium pricing tier.

C. Set Cluster Mode to High Concurrency.

D. Create a cluster policy in workspace1.


96. You use Azure Stream Analytics to receive data from Azure Event Hubs and to output the data
to an Azure Blob Storage account.

You need to output the count of records received from the last five minutes every minute.

Which windowing function should you use?

A. Session

B. Tumbling

C. Sliding

D. Hopping
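
A hopping window with a five-minute size and a one-minute hop emits the running five-minute count every minute. A minimal sketch; the input, output, and timestamp names are assumptions:

SELECT COUNT(*) AS RecordCount
INTO BlobOutput
FROM EventHubInput TIMESTAMP BY EventTime
GROUP BY HoppingWindow(minute, 5, 1)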

97. You configure version control for an Azure Data Factory instance as shown in the following
exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the graphic
98. You are designing an Azure Stream Analytics solution that receives instant messaging data from
an Azure Event Hub.

You need to ensure that the output from the Stream Analytics job counts the number of messages per
time zone every 15 seconds.

How should you complete the Stream Analytics query?
