Guidance for Power BI
Article • 09/22/2021
Here you'll find guidance and recommended practices for Power BI. The guidance is updated and extended on an ongoing basis.
Data modeling

Guidance: Understand star schema and the importance for Power BI
Description: Describes star schema design and its relevance to developing Power BI data models optimized for performance and usability.

Guidance: Data reduction techniques for Import modeling
Description: Describes different techniques to help reduce the data loaded into Import models.

DAX

Guidance: DAX: DIVIDE function vs divide operator (/)
Description: Describes proper use of the DIVIDE function within DAX.

Dataflows

Guidance: Dataflows best practices
Description: Describes best practices for designing dataflows in Power BI.
Optimization guide for Power BI

This article provides guidance that enables developers and administrators to produce and maintain optimized Power BI solutions. You can optimize your solution at different architectural layers, including the data sources, the data model, visualizations (dashboards, Power BI reports, and Power BI paginated reports), and the environment (capacities, gateways, and the network).
Optimizing visualizations
Power BI visualizations can be dashboards, Power BI reports, or Power BI paginated reports. Each has a different architecture, and so each has its own guidance.
Dashboards
It's important to understand that Power BI maintains a cache for your dashboard tiles—except live report tiles and streaming tiles. If your dataset enforces dynamic row-level security (RLS), be sure to understand the performance implications, because tiles cache on a per-user basis.
When you pin live report tiles to a dashboard, they're not served from the query cache.
Instead, they behave like reports, and make queries to v-cores on the fly.
As the name suggests, retrieving the data from the cache provides better and more
consistent performance than relying on the data source. One way to take advantage of
this functionality is to have dashboards be the first landing page for your users. Pin
often-used and highly requested visuals to the dashboards. In this way, dashboards
become a valuable "first line of defense", which delivers consistent performance with
less load on the capacity. Users can still click through to a report to analyze details.
For DirectQuery and live connection datasets, the cache is updated on a periodic basis
by querying the data source. By default, it happens every hour, though you can
configure a different frequency in the dataset settings. Each cache update will send
queries to the underlying data source to update the cache. The number of queries that
generate depends on the number of visuals pinned to dashboards that rely on the data
source. Notice that if row-level security is enabled, queries are generated for each
different security context. For example, consider there are two different roles that
categorize your users, and they have two different views of the data. During query cache
refresh, Power BI generates two sets of queries.
Power BI reports
There are several recommendations for optimizing Power BI report designs.
Note
When reports are based on a DirectQuery dataset, for additional report design
optimizations, see DirectQuery model guidance in Power BI Desktop (Optimize
report designs).
A common mistake is to leave the default view of a table visual unfiltered—that is, all
100M+ rows. The data for these rows loads into memory and is uncompressed at every
refresh. This processing creates huge demands for memory. The solution: use the "Top
N" filter to reduce the max number of items that the table displays. You can set the max
item to larger than what users would need, for example, 10,000. The result is the end-
user experience doesn't change, but memory use drops greatly. And most importantly,
performance improves.
A similar design approach to the above is suggested for every visual in your report. Ask
yourself, is all the data in this visual needed? Are there ways to filter the amount of data
shown in the visual with minimal impact to the end-user experience? Remember, tables
in particular can be expensive.
Also, ensure your capacity has sufficient memory allocated to the paginated reports
workload.
Capacity settings
When using capacities—available with Power BI Premium (P SKUs), Premium Per User
(PPU) licenses, or Power BI Embedded (A SKUs, A4-A6)—you can manage capacity
settings. For more information, see Managing Premium capacities.
Gateway sizing
A gateway is required whenever Power BI must access data that isn't accessible directly over the Internet. You can install the on-premises data gateway on an on-premises server, or on a virtual machine hosted as infrastructure as a service (IaaS).
Network latency
Network latency can impact report performance by increasing the time required for
requests to reach the Power BI service, and for responses to be delivered. Tenants in
Power BI are assigned to a specific region.
Tip
To determine where your tenant is located, see Where is my Power BI tenant located?
When users from a tenant access the Power BI service, their requests always route to this
region. As requests reach the Power BI service, the service may then send additional
requests—for example, to the underlying data source, or a data gateway—which are
also subject to network latency.
Tools such as Azure Speed Test provide an indication of network latency between the
client and the Azure region. In general, to minimize the impact of network latency, strive
to keep data sources, gateways, and your Power BI capacity as close as possible.
Preferably, they reside within the same region. If network latency is an issue, try locating
gateways and data sources closer to your Power BI capacity by placing them inside
cloud-hosted virtual machines.
Monitoring performance
You can monitor performance to identify bottlenecks. Slow queries—or report visuals—
should be a focal point of continued optimization. Monitoring can be done at design
time in Power BI Desktop, or on production workloads in Power BI Premium capacities.
For more information, see Monitoring report performance in Power BI.
Next steps
For more information about this article, check out the following resources:
Power BI guidance
Monitoring report performance
Power BI adoption roadmap
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Query folding guidance in Power BI Desktop
Article • 06/06/2023
This article targets data modelers developing models in Power BI Desktop. It provides
best practice guidance on when—and how—you can achieve Power Query query
folding.
Query folding is the ability for a Power Query query to generate a single query
statement that retrieves and transforms source data. For more information, see Power
Query query folding.
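For example, the following Power Query (M) query folds to a single SELECT statement with a WHERE clause and a column list. It's a minimal sketch—the server, database, table, and column names are assumptions, not part of the original guidance.

```powerquery-m
let
    // Connect to a relational source that supports folding.
    Source = Sql.Database("sqlserver.contoso.com", "AdventureWorksDW"),
    Sales = Source{[Schema = "dbo", Item = "FactResellerSales"]}[Data],
    // Both of these steps fold, so the source receives one SELECT statement.
    FilteredRows = Table.SelectRows(Sales, each [OrderDate] >= #date(2020, 1, 1)),
    SelectedColumns = Table.SelectColumns(FilteredRows, {"SalesOrderNumber", "OrderDate", "SalesAmount"})
in
    SelectedColumns
```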
Guidance
Query folding guidance differs based on the model mode.
For a DirectQuery or Dual storage mode table, the Power Query query must achieve
query folding.
For an Import table, it may be possible to achieve query folding. When the query is
based on a relational source—and if a single SELECT statement can be constructed—you
achieve best data refresh performance by ensuring that query folding occurs. If the
Power Query mashup engine is still required to process transformations, you should
strive to minimize the work it needs to do, especially for large datasets.
Delegate as much processing to the data source as possible: When all steps of a
Power Query query can't be folded, discover the step that prevents query folding.
When possible, move later steps earlier in sequence so they may be factored into
the query folding. Note the Power Query mashup engine may be smart enough to
reorder your query steps when it generates the source query.
For a relational data source, if the step that prevents query folding could be
achieved in a single SELECT statement—or within the procedural logic of a stored
procedure—consider using a native SQL query, as described next.
Use a native SQL query: When a Power Query query retrieves data from a
relational source, it's possible for some sources to use a native SQL query. The
query can in fact be any valid statement, including a stored procedure execution. If
the statement produces multiple result sets, only the first will be returned.
Parameters can be declared in the statement, and we recommend that you use the
Value.NativeQuery M function. This function was designed to safely and
conveniently pass parameter values. It's important to understand that the Power Query mashup engine can't fold later query steps, and so you should include all—or as much as possible—of the transformation logic in the native query statement (see the sketch after this list).
There are two important considerations you need to bear in mind when using
native SQL queries:
For a DirectQuery model table, the query must be a SELECT statement, and it
can't use Common Table Expressions (CTEs) or a stored procedure.
Incremental refresh can't use a native SQL query. So, it would force the Power
Query mashup engine to retrieve all source rows, and then apply filters to
determine incremental changes.
Important
A native SQL query can potentially do more than retrieve data. Any valid
statement can be executed (and possibly multiple times), including one that
modifies or deletes data. It's important that you apply the principle of least
privilege to ensure that the account used to access the database has only read
permission on required data.
Prepare and transform data in the source: When you identify that certain Power
Query query steps can't be folded, it may be possible to apply the transformations
in the data source. The transformations could be achieved by writing a database
view that logically transforms source data. Or, by physically preparing and
materializing data, in advance of Power BI querying it. A relational data warehouse
is an excellent example of prepared data, usually consisting of pre-integrated
sources of organizational data.
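Here's the sketch referenced in the native SQL query guidance above. It uses Value.NativeQuery to pass a parameterized statement; the server, database, table, and parameter names are assumptions.

```powerquery-m
let
    Source = Sql.Database("sqlserver.contoso.com", "AdventureWorksDW"),
    // Pass the parameter value through the third argument rather than
    // concatenating it into the statement text. EnableFolding lets later
    // steps fold where the connector supports it.
    SalesSince = Value.NativeQuery(
        Source,
        "SELECT SalesOrderNumber, OrderDate, SalesAmount
         FROM dbo.FactResellerSales
         WHERE OrderDate >= @FromDate",
        [FromDate = #date(2020, 1, 1)],
        [EnableFolding = true]
    )
in
    SalesSince
```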
Next steps
For more information about this article, check out the following resources:
Referencing Power Query queries

This article targets you as a data modeler working with Power BI Desktop. It provides guidance for defining Power Query queries that reference other queries.
Let's be clear about what this means: When a query references a second query, it's as
though the steps in the second query are combined with, and run before, the steps in the
first query.
Consider several queries: Query1 sources data from a web service, and its load is
disabled. Query2, Query3, and Query4 all reference Query1, and their outputs are
loaded to the data model.
When the data model is refreshed, it's often assumed that Power Query retrieves the
Query1 result, and that it's reused by referenced queries. This thinking is incorrect. In
fact, Power Query executes Query2, Query3, and Query4 separately.
You can think of Query2 as having the Query1 steps embedded into it. It's the case for
Query3 and Query4, too. The following diagram presents a clearer picture of how the
queries are executed.
Query1 is executed three times. The multiple executions can result in slow data refresh,
and negatively impact on the data source.
The use of the Table.Buffer function in Query1 won't eliminate the additional data
retrieval. This function buffers a table to memory. And, the buffered table can only be
used within the same query execution. So, in the example, if Query1 is buffered when
Query2 is executed, the buffered data couldn't be used when Query3 and Query4 are
executed. They'll themselves buffer the data twice more. (This result could in fact
compound the negative performance, because the table will be buffered by each
referencing query.)
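As a minimal sketch of the scenario above (the source URL and query structure are illustrative), Query1 might look like the following. The Table.Buffer call helps only within a single evaluation—when Query2, Query3, and Query4 are executed, each evaluation retrieves and buffers the data again.

```powerquery-m
// Query1 (load disabled)
let
    Source = Csv.Document(
        Web.Contents("https://example.com/data/products.csv"),
        [Delimiter = ",", Encoding = 65001]
    ),
    PromotedHeaders = Table.PromoteHeaders(Source),
    // Buffered only for the query evaluation that calls it; it isn't shared
    // across the separate evaluations triggered by referencing queries.
    Buffered = Table.Buffer(PromotedHeaders)
in
    Buffered
```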
Note
Power Query caching architecture is complex, and it's not the focus of this article.
Power Query can cache data retrieved from a data source. However, when it
executes a query, it may retrieve the data from the data source more than once.
Recommendations
Generally, we recommend you reference queries to avoid the duplication of logic across
your queries. However, as described in this article, this design approach can contribute
to slow data refreshes, and overburden data sources.
We recommend you create a dataflow instead. Using a dataflow can improve data
refresh time, and reduce impact on your data sources.
You can design the dataflow to encapsulate the source data and transformations. As the
dataflow is a persisted store of data in the Power BI service, its data retrieval is fast. So,
even when referencing queries result in multiple requests for the dataflow, data refresh
times can be improved.
In the example, if Query1 is redesigned as a dataflow entity, Query2, Query3, and
Query4 can use it as a data source. With this design, the entity sourced by Query1 will
be evaluated only once.
Next steps
For more information related to this article, check out the following resources:
Disable Power Query background refresh

This article targets Import data modelers working with Power BI Desktop.
By default, when Power Query imports data, it also caches up to 1000 rows of preview
data for each query. Preview data helps to present you with a quick preview of source
data, and of transformation results for each step of your queries. It's stored separately
on-disk and not inside the Power BI Desktop file.
However, when your Power BI Desktop file contains many queries, retrieving and storing
preview data can extend the time it takes to complete a refresh.
Recommendation
You'll achieve a faster refresh by preventing the Power BI Desktop file from updating the preview cache in the background. In Power BI Desktop, you do this by selecting File > Options and settings > Options, selecting the Data Load page (under Current File), and then turning off the Allow data preview to download in the background option. Note this option can only be set for the current file.
Disabling background preview downloads can result in preview data becoming out of date. If that occurs, the Power Query Editor will notify you with the following warning:
It's always possible to update the preview cache. You can update it for a single query, or
for all queries by using the Refresh Preview command. You'll find it on the Home ribbon
of the Power Query Editor window.
Next steps
For more information related to this article, check out the following resources:
Introduction to dataflows and self-service data prep

Tip
You can also try Dataflow Gen2 in Data Factory in Microsoft Fabric, an all-in-one
analytics solution for enterprises. Microsoft Fabric covers everything from data
movement to data science, real-time analytics, business intelligence, and reporting.
Learn how to start a new trial for free.
As data volume continues to grow, so does the challenge of wrangling that data into
well-formed, actionable information. We want data that’s ready for analytics, to populate
visuals, reports, and dashboards, so we can quickly turn our volumes of data into
actionable insights. With self-service data prep for big data in Power BI, you can go from data to Power BI insights with just a few actions. With dataflows, you can:
Persist data in your own Azure Data Lake Gen 2 storage, enabling you to expose it
to other Azure services outside Power BI.
Create a single source of truth, curated from raw data using industry standard
definitions, which can work with other services and products in the Power Platform.
Encourage uptake by removing analysts' access to underlying data sources.
If you want to work with large data volumes and perform ETL at scale, dataflows with Power BI Premium scale more efficiently and give you more flexibility. Dataflows support a wide range of cloud and on-premises sources.
You can use Power BI Desktop and the Power BI service with dataflows to create
datasets, reports, dashboards, and apps that use the Common Data Model. From these
resources, you can gain deep insights into your business activities. Dataflow refresh
scheduling is managed directly from the workspace in which your dataflow was created,
just like your datasets.
Note
Dataflows may not be available in the Power BI service for all U.S. Government DoD
customers. For more information about which features are available, and which are
not, see Power BI feature availability for U.S. Government customers.
Next steps
This article provided an overview of self-service data prep for big data in Power BI, and
the many ways you can use it.
The following articles provide more information about dataflows and Power BI:
Creating a dataflow
Configure and consume a dataflow
Configuring Dataflow storage to use Azure Data Lake Gen 2
Premium features of dataflows
AI with dataflows
Dataflows considerations and limitations
Dataflows best practices
Power BI usage scenarios: Self-service data preparation
For more information about the Common Data Model, you can read its overview article:
Understand star schema and the importance for Power BI

This article targets Power BI Desktop data modelers. It describes star schema design and
its relevance to developing Power BI data models optimized for performance and
usability.
This article isn't intended to provide a complete discussion on star schema design. For
more details, refer directly to published content, like The Data Warehouse Toolkit: The
Definitive Guide to Dimensional Modeling (3rd edition, 2013) by Ralph Kimball et al.
Dimension tables describe business entities—the things you model. Entities can include
products, people, places, and concepts including time itself. The most consistent table
you'll find in a star schema is a date dimension table. A dimension table contains a key
column (or columns) that acts as a unique identifier, and descriptive columns.
Fact tables store observations or events, and can be sales orders, stock balances,
exchange rates, temperatures, etc. A fact table contains dimension key columns that
relate to dimension tables, and numeric measure columns. The dimension key columns
determine the dimensionality of a fact table, while the dimension key values determine
the granularity of a fact table. For example, consider a fact table designed to store sale
targets that has two dimension key columns Date and ProductKey. It's easy to
understand that the table has two dimensions. The granularity, however, can't be
determined without considering the dimension key values. In this example, consider that
the values stored in the Date column are the first day of each month. In this case, the
granularity is at month-product level.
Generally, dimension tables contain a relatively small number of rows. Fact tables, on the
other hand, can contain a very large number of rows and continue to grow over time.
Normalization vs. denormalization
To understand some star schema concepts described in this article, it's important to
know two terms: normalization and denormalization.
Normalization is the term used to describe data that's stored in a way that reduces
repetitious data. Consider a table of products that has a unique key value column, like
the product key, and additional columns describing product characteristics, including
product name, category, color, and size. A sales table is considered normalized when it
stores only keys, like the product key. In the following image, notice that only the
ProductKey column records the product.
If, however, the sales table stores product details beyond the key, it's considered
denormalized. In the following image, notice that the ProductKey and other product-
related columns record the product.
When you source data from an export file or data extract, it's likely that it represents a
denormalized set of data. In this case, use Power Query to transform and shape the
source data into multiple normalized tables.
As described in this article, you should strive to develop optimized Power BI data
models with tables that represent normalized fact and dimension data. However, there's
one exception where a snowflake dimension should be denormalized to produce a
single model table.
Consider that each Power BI report visual generates a query that is sent to the Power BI
model (which the Power BI service calls a dataset). These queries are used to filter,
group, and summarize model data. A well-designed model, then, is one that provides tables for filtering and grouping, and tables for summarizing. This design fits well with star schema principles: dimension tables support filtering and grouping, while fact tables support summarization.
There's no table property that modelers set to configure the table type as dimension or
fact. It's in fact determined by the model relationships. A model relationship establishes
a filter propagation path between two tables, and it's the Cardinality property of the
relationship that determines the table type. A common relationship cardinality is one-to-
many or its inverse many-to-one. The "one" side is always a dimension-type table while
the "many" side is always a fact-type table. For more information about relationships,
see Model relationships in Power BI Desktop.
A well-structured model design should include tables that are either dimension-type
tables or fact-type tables. Avoid mixing the two types together for a single table. We
also recommend that you should strive to deliver the right number of tables with the
right relationships in place. It's also important that fact-type tables always load data at a
consistent grain.
Lastly, it's important to understand that optimal model design is part science and part
art. Sometimes you can break with good guidance when it makes sense to do so.
There are many additional concepts related to star schema design that can be applied to
a Power BI model. These concepts include:
Measures
Surrogate keys
Snowflake dimensions
Role-playing dimensions
Slowly changing dimensions
Junk dimensions
Degenerate dimensions
Factless fact tables
Measures
In star schema design, a measure is a fact table column that stores values to be
summarized.
It's important to understand that Power BI models support a second method for
achieving summarization. Any column—and typically numeric columns—can be
summarized by a report visual or Q&A. These columns are referred to as implicit
measures. They offer a convenience for you as a model developer, as in many instances
you do not need to create measures. For example, the Adventure Works reseller sales
Sales Amount column could be summarized in numerous ways (sum, count, average,
median, min, max, etc.), without the need to create a measure for each possible
aggregation type.
However, there are three compelling reasons for you to create measures, even for
simple column-level summarizations:
When you know your report authors will query the model by using
Multidimensional Expressions (MDX), the model must include explicit measures.
Explicit measures are defined by using DAX. This design approach is highly relevant
when a Power BI dataset is queried by using MDX, because MDX can't achieve
summarization of column values. Notably, MDX will be used when performing
Analyze in Excel, because PivotTables issue MDX queries.
When you know your report authors will create Power BI paginated reports using
the MDX query designer, the model must include explicit measures. Only the MDX
query designer supports server aggregates. So, if report authors need to have
measures evaluated by Power BI (instead of by the paginated report engine), they
must use the MDX query designer.
When you need to ensure that your report authors can only summarize columns in
specific ways. For example, the reseller sales Unit Price column (which represents a
per unit rate) can be summarized, but only by using specific aggregation functions.
It should never be summed, but it's appropriate to summarize by using other
aggregation functions like min, max, average, etc. In this instance, the modeler can
hide the Unit Price column, and create measures for all appropriate aggregation
functions.
This design approach works well for reports authored in the Power BI service and for
Q&A. However, Power BI Desktop live connections allow report authors to show hidden
fields in the Fields pane, which can result in circumventing this design approach.
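As a sketch of the approach described for the Unit Price column (the 'Reseller Sales' table and column names assume the Adventure Works example), the hidden column could be exposed only through explicit measures like these:

```dax
-- Hypothetical explicit measures; the Unit Price column itself is hidden in the model.
Minimum Unit Price = MIN ( 'Reseller Sales'[Unit Price] )
Maximum Unit Price = MAX ( 'Reseller Sales'[Unit Price] )
Average Unit Price = AVERAGE ( 'Reseller Sales'[Unit Price] )
```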
Surrogate keys
A surrogate key is a unique identifier that you add to a table to support star schema
modeling. By definition, it's not defined or stored in the source data. Commonly,
surrogate keys are added to relational data warehouse dimension tables to provide a
unique identifier for each dimension table row.
Power BI model relationships are based on a single unique column in one table, which
propagates filters to a single column in a different table. When a dimension-type table
in your model doesn't include a single unique column, you must add a unique identifier
to become the "one" side of a relationship. In Power BI Desktop, you can easily achieve
this requirement by creating a Power Query index column.
You must merge this query with the "many"-side query so that you can add the index
column to it also. When you load these queries to the model, you can then create a one-
to-many relationship between the model tables.
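A minimal Power Query sketch of adding such a surrogate key, assuming a Salesperson dimension query; the query and column names are illustrative.

```powerquery-m
let
    Source = Salesperson_Raw,   // assumed earlier query returning the dimension rows
    // Add a surrogate key as a 1-based index column.
    AddedKey = Table.AddIndexColumn(Source, "SalespersonKey", 1, 1, Int64.Type)
in
    AddedKey
```

The fact query can then be merged with this query on the natural key (for example, by using Table.NestedJoin with a left outer join) and the SalespersonKey column expanded into it, so a one-to-many model relationship can be created on SalespersonKey.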
Snowflake dimensions
A snowflake dimension is a set of normalized tables for a single business entity. For
example, Adventure Works classifies products by category and subcategory. Products
are assigned to subcategories, and subcategories are in turn assigned to categories. In
the Adventure Works relational data warehouse, the product dimension is normalized
and stored in three related tables: DimProductCategory, DimProductSubcategory, and
DimProduct.
If you use your imagination, you can picture the normalized tables positioned outwards
from the fact table, forming a snowflake design.
In Power BI Desktop, you can choose to mimic a snowflake dimension design (perhaps
because your source data does) or integrate (denormalize) the source tables into a
single model table. Generally, the benefits of a single model table outweigh the benefits
of multiple model tables. The most optimal decision can depend on the volumes of data
and the usability requirements for the model. When you choose to mimic a snowflake dimension design:
Power BI loads more tables, which is less efficient from storage and performance
perspectives. These tables must include columns to support model relationships,
and it can result in a larger model size.
Longer relationship filter propagation chains will need to be traversed, which will
likely be less efficient than filters applied to a single table.
The Fields pane presents more model tables to report authors, which can result in
a less intuitive experience, especially when snowflake dimension tables contain just
one or two columns.
It's not possible to create a hierarchy that spans the tables.
When you choose to integrate into a single model table, you can also define a hierarchy
that encompasses the highest and lowest grain of the dimension. Possibly, the storage
of redundant denormalized data can result in increased model storage size, particularly
for very large dimension tables.
Slowly changing dimensions
Star schema design theory refers to two common SCD types: Type 1 and Type 2. A
dimension-type table could be Type 1 or Type 2, or support both types simultaneously
for different columns.
Type 1 SCD
A Type 1 SCD always reflects the latest values, and when changes in source data are
detected, the dimension table data is overwritten. This design approach is common for
columns that store supplementary values, like the email address or phone number of a
customer. When a customer email address or phone number changes, the dimension
table updates the customer row with the new values. It's as if the customer always had
this contact information.
Type 2 SCD
A Type 2 SCD supports versioning of dimension members. If the source system doesn't
store versions, then it's usually the data warehouse load process that detects changes,
and appropriately manages the change in a dimension table. In this case, the dimension
table must use a surrogate key to provide a unique reference to a version of the
dimension member. It also includes columns that define the date range validity of the
version (for example, StartDate and EndDate) and possibly a flag column (for example,
IsCurrent) to easily filter by current dimension members.
It's important to understand that when the source data doesn't store versions, you must
use an intermediate system (like a data warehouse) to detect and store changes. The
table load process must preserve existing data and detect changes. When a change is
detected, the table load process must expire the current version. It records these
changes by updating the EndDate value and inserting a new version with the StartDate
value commencing from the previous EndDate value. Also, related facts must use a
time-based lookup to retrieve the dimension key value relevant to the fact date. A
Power BI model using Power Query can't produce this result. It can, however, load data
from a pre-loaded SCD Type 2 dimension table.
The Power BI model should support querying historical data for a member, regardless of
change, and for a version of the member, which represents a particular state of the
member in time. In the context of Adventure Works, this design enables you to query
the salesperson regardless of assigned sales region, or for a particular version of the
salesperson.
To achieve this requirement, the Power BI model dimension-type table must include a
column for filtering the salesperson, and a different column for filtering a specific
version of the salesperson. It's important that the version column provides a non-
ambiguous description, like "Michael Blythe (12/15/2008-06/26/2019)" or "Michael
Blythe (current)". It's also important to educate report authors and consumers about the
basics of SCD Type 2, and how to achieve appropriate report designs by applying
correct filters.
It's also a good design practice to include a hierarchy that allows visuals to drill down to
the version level.
Role-playing dimensions
A role-playing dimension is a dimension that can filter related facts differently. For
example, at Adventure Works, the date dimension table has three relationships to the
reseller sales facts. The same dimension table can be used to filter the facts by order
date, ship date, or delivery date.
In a data warehouse, the accepted design approach is to define a single date dimension
table. At query time, the "role" of the date dimension is established by which fact
column you use to join the tables. For example, when you analyze sales by order date,
the table join relates to the reseller sales order date column.
In a Power BI model, however, only one relationship between two tables can be active at a time; the others are inactive. The only way to use an inactive relationship is to define a DAX expression that uses the USERELATIONSHIP function (see the sketch after the following list). In our example, the model developer must create measures to enable analysis of reseller sales by ship date and delivery date. This work can be tedious, especially when the reseller sales table defines many measures. It also creates Fields pane clutter, with an overabundance of measures. There are other limitations, too:
When report authors rely on summarizing columns, rather than defining measures,
they can't achieve summarization for the inactive relationships without writing a
report-level measure. Report-level measures can only be defined when authoring
reports in Power BI Desktop.
With only one active relationship path between date and reseller sales, it's not
possible to simultaneously filter reseller sales by different types of dates. For
example, you can't produce a visual that plots order date sales by shipped sales.
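Here's a sketch of such a measure, assuming an inactive relationship between the reseller sales ship date column and the date table; the table and column names are assumptions.

```dax
-- Hypothetical measure that activates the inactive ship date relationship.
Sales Amount by Ship Date =
CALCULATE (
    SUM ( 'Reseller Sales'[Sales Amount] ),
    USERELATIONSHIP ( 'Reseller Sales'[ShipDateKey], 'Date'[DateKey] )
)
```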
A different design approach involves creating a duplicate date dimension-type table for each role (for example, Ship Date and Delivery Date tables in addition to the Date table), each with an active relationship to its corresponding fact column. This design approach doesn't require you to define multiple measures for different date roles, and it allows simultaneous filtering by different date roles. A minor price to pay with this design approach is that there will be duplication of the date dimension table, resulting in an increased model storage size. As dimension-type tables typically store fewer rows than fact-type tables, it's rarely a concern.
Observe the following good design practices when you create model dimension-type
tables for each role:
Ensure that the column names are self-describing. While it's possible to have a Year column in all date tables (column names only need to be unique within their table), default visual titles won't be self-describing. Consider renaming columns in each dimension role table, so that the Ship Date table has a year column named Ship Year, and so on.
When relevant, ensure that table descriptions provide feedback to report authors
(through Fields pane tooltips) about how filter propagation is configured. This
clarity is important when the model contains a generically named table, like Date,
which is used to filter many fact-type tables. In the case that this table has, for
example, an active relationship to the reseller sales order date column, consider
providing a table description like "Filters reseller sales by order date".
Junk dimensions
A junk dimension is useful when there are many dimensions, especially when they consist of few attributes (perhaps one), and when these attributes have few values. Good
candidates include order status columns, or customer demographic columns (gender,
age group, etc.).
The design objective of a junk dimension is to consolidate many "small" dimensions into
a single dimension to both reduce the model storage size and also reduce Fields pane
clutter by surfacing fewer model tables.
A junk dimension table is typically the Cartesian product of all dimension attribute
members, with a surrogate key column. The surrogate key provides a unique reference
to each row in the table. You can build the dimension in a data warehouse, or by using
Power Query to create a query that performs full outer query joins, then adds a
surrogate key (index column).
You load this query to the model as a dimension-type table. You also need to merge this
query with the fact query, so the index column is loaded to the model to support the
creation of a "one-to-many" model relationship.
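A minimal Power Query sketch of building a junk dimension from two small attribute queries; the OrderStatus and DeliveryStatus queries and their column names are assumptions.

```powerquery-m
let
    // Cross join: add the whole DeliveryStatus table to every OrderStatus row, then expand it.
    CrossJoined = Table.ExpandTableColumn(
        Table.AddColumn(OrderStatus, "Delivery", each DeliveryStatus),
        "Delivery",
        {"DeliveryStatus"}
    ),
    // Add a surrogate key as an index column.
    JunkDimension = Table.AddIndexColumn(CrossJoined, "StatusKey", 1, 1, Int64.Type)
in
    JunkDimension
```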
Degenerate dimensions
A degenerate dimension refers to an attribute of the fact table that is required for
filtering. At Adventure Works, the reseller sales order number is a good example. In this
case, it doesn't make good model design sense to create an independent table
consisting of just this one column, because it would increase the model storage size and
result in Fields pane clutter.
In the Power BI model, it can be appropriate to add the sales order number column to
the fact-type table to allow filtering or grouping by sales order number. It is an
exception to the formerly introduced rule that you should not mix table types (generally,
model tables should be either dimension-type or fact-type).
However, if the Adventure Works reseller sales table has order number and order line
number columns, and they're required for filtering, a degenerate dimension table would
be a good design. For more information, see One-to-one relationship guidance
(Degenerate dimensions).
Factless fact tables
A factless fact table could store observations defined by dimension keys. For example, at
a particular date and time, a particular customer logged into your web site. You could
define a measure to count the rows of the factless fact table to perform analysis of when
and how many customers have logged in.
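For example, such a measure might be defined as follows; the 'Customer Login' factless fact table name is an assumption.

```dax
-- Counts one row per login event recorded in the factless fact table.
Login Count = COUNTROWS ( 'Customer Login' )
```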
Next steps
For more information about star schema design or Power BI model design, see the
following articles:
Data reduction techniques for Import modeling

This article targets Power BI Desktop data modelers developing Import models. It
describes different techniques to help reduce the data loaded into Import models.
Import models are loaded with data that is compressed and optimized and then stored
to disk by the VertiPaq storage engine. When source data is loaded into memory, it is
possible to see 10x compression, and so it is reasonable to expect that 10 GB of source
data can compress to about 1 GB in size. Further, when persisted to disk an additional
20% reduction can be achieved.
Despite the efficiencies achieved by the VertiPaq storage engine, it is important that you
strive to minimize the data that is to be loaded into your models. It is especially true for
large models, or models that you anticipate will grow to become large over time. Four
compelling reasons include:
Larger model sizes may not be supported by your capacity. Shared capacity can
host models up to 1 GB in size, while Premium capacities can host larger models
depending on the SKU. For further information, read the Power BI Premium
support for large datasets article.
Smaller model sizes reduce contention for capacity resources, in particular
memory. It allows more models to be concurrently loaded for longer periods of
time, resulting in lower eviction rates.
Smaller models achieve faster data refresh, resulting in lower latency reporting,
higher dataset refresh throughput, and less pressure on source system and
capacity resources.
Smaller table row counts can result in faster calculation evaluations, which can
deliver better overall query performance.
There are eight different data reduction techniques covered in this article: remove unnecessary columns, remove unnecessary rows, group by and summarize, optimize column data types, preference for custom columns, disable Power Query query load, disable auto date/time, and switch to Mixed mode.

Remove unnecessary columns
Model table columns serve two main purposes:
Reporting, to achieve report designs that appropriately filter, group, and summarize model data
Model structure, by supporting model relationships, model calculations, security roles, and even data color formatting
Columns that don't serve these purposes can probably be removed. Removing columns
is referred to as vertical filtering.
We recommend that you design models with exactly the right number of columns based
on the known reporting requirements. Your requirements may change over time, but
bear in mind that it's easier to add columns later than it is to remove them.
Removing columns can break reports or the model structure.
Remove unnecessary rows
Filtering by entity involves loading a subset of source data into the model. For example,
instead of loading sales facts for all sales regions, only load facts for a single region. This
design approach will result in many smaller models, and it can also eliminate the need
to define row-level security (but will require granting specific dataset permissions in the
Power BI service, and creating "duplicate" reports that connect to each dataset). You can
leverage the use of Power Query parameters and Power BI Template files to simplify
management and publication. For further information, read the Deep Dive into Query
Parameters and Power BI Templates blog entry.
Filtering by time involves limiting the amount of data history loaded into fact-type
tables (and limiting the date rows loaded into the model date tables). We suggest you
don't automatically load all available history, unless it is a known reporting requirement.
It is helpful to understand that time-based Power Query filters can be parameterized,
and even set to use relative time periods (relative to the refresh date, for example, the
past five years). Also, bear in mind that retrospective changes to time filters will not
break reports; it will just result in less (or more) data history available in reports.
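A minimal Power Query sketch combining both row-filtering techniques; the Sales query, its columns, and the SalesRegionParameter parameter are assumptions.

```powerquery-m
let
    Source = Sales,
    // Filter by entity: keep rows for a single region, driven by a Power Query parameter.
    RegionRows = Table.SelectRows(Source, each [SalesTerritoryRegion] = SalesRegionParameter),
    // Filter by time: keep only the past five years, relative to the refresh date.
    RecentRows = Table.SelectRows(
        RegionRows,
        each [OrderDate] >= Date.AddYears(Date.From(DateTime.LocalNow()), -5)
    )
in
    RecentRows
```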
Group by and summarize
Perhaps the most effective technique to reduce a model size is to load pre-summarized
data. This technique can be used to raise the grain of fact-type tables. There is a distinct
trade-off, however, resulting in loss of detail.
For example, a source sales fact table stores one row per order line. Significant data
reduction could be achieved by summarizing all sales metrics, grouping by date,
customer, and product. Consider, then, that an even more significant data reduction
could be achieved by grouping by date at month level. It could achieve a possible 99%
reduction in model size, but reporting at day level—or individual order level—is no
longer possible. Deciding to summarize fact-type data always involves tradeoffs.
The tradeoff could be mitigated by a Mixed model design, and this option is described in
the Switch to Mixed mode technique.
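As a sketch, the following Power Query step raises the grain of a sales query by grouping by date, customer, and product; the query and column names are assumptions.

```powerquery-m
let
    Source = Sales,
    // One output row per date, customer, and product combination.
    Summarized = Table.Group(
        Source,
        {"OrderDate", "CustomerKey", "ProductKey"},
        {
            {"SalesAmount", each List.Sum([SalesAmount]), type number},
            {"OrderQuantity", each List.Sum([OrderQuantity]), Int64.Type}
        }
    )
in
    Summarized
```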
Optimize column data types
In some specific instances, you can convert source text data into numeric values. For
example, a sales order number may be consistently prefixed by a text value (e.g.
"SO123456"). The prefix could be removed, and the order number value converted to
whole number. For large tables, it can result in significant data reduction, especially
when the column contains unique or high cardinality values.
In this example, we recommend that you set the column Default Summarization
property to "Do Not Summarize". It will help to minimize the inappropriate
summarization of the order number values.
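A minimal Power Query sketch of this conversion, assuming a Sales query with an OrderNumber column that's consistently prefixed by "SO":

```powerquery-m
let
    Source = Sales,
    // Strip the two-character "SO" prefix and convert the remainder to a whole number.
    Converted = Table.TransformColumns(
        Source,
        {{"OrderNumber", each Int64.From(Text.Middle(_, 2)), Int64.Type}}
    )
in
    Converted
```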
Preference for custom columns
Where possible, prefer creating custom columns in Power Query over DAX calculated columns. When the source is a
database, you can achieve greater load efficiency in two ways. The calculation can be
defined in the SQL statement (using the native query language of the provider), or it can
be materialized as a column in the data source.
However, in some instances, model calculated columns may be the better choice. It can
be the case when the formula involves evaluating measures, or it requires specific
modeling functionality only supported in DAX functions. For information on one such
example, refer to the Understanding functions for parent-child hierarchies in DAX article.
Switch to Mixed mode
An effective technique to reduce the model size is to set the Storage Mode property for
larger fact-type tables to DirectQuery. Consider that this design approach could work
well in conjunction with the Group by and summarize technique introduced earlier. For
example, summarized sales data could be used to achieve high performance "summary"
reporting. A drill through page could display granular sales for specific (and narrow)
filter context, displaying all in-context sales orders. In this example, the drillthrough
page would include visuals based on a DirectQuery table to retrieve the sales order data.
There are, however, many security and performance implications related to Composite
models. For further information, read the Use composite models in Power BI Desktop
article.
Next steps
For more information about Power BI Import model design, see the following articles:
Create date tables in Power BI Desktop

This article targets you as a data modeler working with Power BI Desktop. It describes
good design practices for creating date tables in your data models.
To work with Data Analysis Expressions (DAX) time intelligence functions, there's a
prerequisite model requirement: You must have at least one date table in your model. A
date table is a table that meets the following requirements:
It must have a column of data type date (or date/time)—known as the date column.
The date column must contain unique values.
The date column must not contain BLANKs.
The date column must not have any missing dates.
The date column must span full years. A year isn't necessarily a calendar year (January-December).
The date table must be marked as a date table.
You can use any of several techniques to add a date table to your model:
Tip
A date table is perhaps the most consistent feature you'll add to any of your
models. What's more, within an organization a date table should be consistently
defined. So, whatever technique you decide to use, we recommend you create a
Power BI Desktop template that includes a fully configured date table. Share the
template with all modelers in your organization. So, whenever someone develops a
new model, they can begin with a consistently defined date table.
If you're developing a DirectQuery model and your data source doesn't include a date
table, we strongly recommend you add a date table to the data source. It should meet
all the modeling requirements of a date table. You can then use Power Query to connect
to the date table. This way, your model calculations can leverage the DAX time
intelligence capabilities.
Tip
If you don't have a data warehouse or other consistent definition for time in your
organization, consider using Power Query to publish a dataflow. Then, have all data
modelers connect to the dataflow to add date tables to their models. The dataflow
becomes the single source of truth for time in your organization.
If you need to generate a date table, consider doing it with DAX. You might find it's
easier. What's more, it's likely to be more convenient, because DAX includes some built-
in intelligence to simplify creating and managing date tables.
Use the CALENDAR function when you want to define a date range. You pass in
two values: the start date and end date. These values can be defined by other DAX
functions, like MIN(Sales[OrderDate]) or MAX(Sales[OrderDate]) .
Use the CALENDARAUTO function when you want the date range to automatically
encompass all dates stored in the model. You can pass in a single optional
parameter that's the end month of the year (if your year is a calendar year, which
ends in December, you don't need to pass in a value). It's a helpful function,
because it ensures that full years of dates are returned—it's a requirement for a
marked date table. What's more, you don't need to manage extending the table to
future years: When a data refresh completes, it triggers the recalculation of the
table. A recalculation will automatically extend the table's date range when dates
for a new year are loaded into the model.
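For example, a calculated date table could be defined with either function. This is a sketch—the Sales[OrderDate] column and the fiscal year end month are assumptions.

```dax
-- CALENDAR: full calendar years spanning the Sales order dates.
Date =
CALENDAR (
    DATE ( YEAR ( MIN ( Sales[OrderDate] ) ), 1, 1 ),
    DATE ( YEAR ( MAX ( Sales[OrderDate] ) ), 12, 31 )
)

-- CALENDARAUTO: full years covering all dates in the model, for a year ending in June.
Fiscal Date = CALENDARAUTO ( 6 )
```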
Next steps
For more information related to this article, check out the following resources:
Auto date/time guidance in Power BI Desktop

This article targets data modelers developing Import or Composite models in Power BI
Desktop. It provides guidance, recommendations, and considerations when using Power
BI Desktop Auto date/time in specific situations. For an overview and general
introduction to Auto date/time, see Auto date/time in Power BI Desktop.
The Auto date/time option delivers convenient, fast, and easy-to-use time intelligence.
Report authors can work with time intelligence when filtering, grouping, and drilling
down through calendar time periods.
Considerations
The following bulleted list describes considerations—and possible limitations—related
to the Auto date/time option.
Applies to all or none: When the Auto date/time option is enabled, it applies to all
date columns in Import tables that aren't the "many" side of a relationship. It can't
be selectively enabled or disabled on a column-by-column basis.
Calendar periods only: The year and quarter columns relate to calendar periods. It
means that the year begins on January 1 and finishes on December 31. There's no
ability to customize the year commencement (or completion) date.
Customization: It's not possible to customize the values used to describe time
periods. Further, it's not possible to add additional columns to describe other time
periods, for example, weeks.
Year filtering: The Quarter, Month, and Day column values don't include the year
value. For example, the Month column contains the month names only (that is,
January, February, etc.). The values are not fully self-describing, and in some report
designs may not communicate the year filter context.
That's why it's important that filters or grouping take place on the Year column. When drilling down by using the hierarchy, the year will be filtered, unless the Year level is intentionally removed. If there's no filter or group by year, a grouping by month, for example, would summarize values across all years for that month.
Single table date filtering: Because each date column produces its own (hidden)
auto date/time table, it's not possible to apply a time filter to one table and have it
propagate to multiple model tables. Filtering in this way is a common modeling
requirement when reporting on multiple subjects (fact-type tables) like sales and
sales budget. When using auto date/time, the report author will need to apply
filters to each different date column.
Model size: Each date column that generates a hidden auto date/time table increases the model size and also extends the data refresh time.
Other reporting tools: It isn't possible to work with auto date/time tables when:
Using Analyze in Excel.
Using Power BI paginated report Analysis Services query designers.
Connecting to the model using non-Power BI report designers.
Recommendations
We recommend that you keep the Auto date/time option enabled only when you
work with calendar time periods, and when you have simplistic model requirements in
relation to time. Using this option can also be convenient when creating ad hoc models
or performing data exploration or profiling.
When your data source already defines a date dimension table, this table should be
used to consistently define time within your organization. It will certainly be the case if
your data source is a data warehouse. Otherwise, you can generate date tables in your
model by using the DAX CALENDAR or CALENDARAUTO functions. You can then add
calculated columns to support the known time filtering and grouping requirements. This
design approach may allow you to create a single date table that propagates to all fact-type tables, possibly resulting in a single table to which you apply time filters. For further information
on creating date tables, read the Set and use date tables in Power BI Desktop article.
Tip
If the Auto date/time option isn't relevant to your projects, we recommend that you
disable the global Auto date/time option. It will ensure that all new Power BI Desktop
files you create won't enable the Auto date/time option.
Next steps
For more information related to this article, check out the following resources:
One-to-one relationship guidance

This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on working with one-to-one model relationships. A one-to-one
relationship can be created when both tables each contain a column of common and
unique values.
Note
It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.
Row data spans across tables: A single business entity or subject is loaded as two
(or more) model tables, possibly because their data is sourced from different data
stores. This scenario can be common for dimension-type tables. For example,
master product details are stored in an operational sales system, and
supplementary product details are stored in a different source.
It's unusual, however, that you'd relate two fact-type tables with a one-to-one
relationship. It's because both fact-type tables would need to have the same
dimensionality and granularity. Also, each fact-type table would need unique
columns to allow the model relationship to be created.
Degenerate dimensions
When columns from a fact-type table are used for filtering or grouping, you can
consider making them available in a separate table. This way, you separate columns
used for filter or grouping, from those columns used to summarize fact rows. This
separation can:
Reduce storage space
Simplify model calculations
Contribute to improved query performance
Deliver a more intuitive Fields pane experience to your report authors
Consider a source sales table that stores sales order details in two columns.
The OrderNumber column stores the order number, and the OrderLineNumber column
stores a sequence of lines within the order.
In the following model diagram, notice that the order number and order line number
columns haven't been loaded to the Sales table. Instead, their values were used to
create a surrogate key column named SalesOrderLineID. (The key value is calculated by
multiplying the order number by 1000, and then adding the order line number.)
The Sales Order table provides a rich experience for report authors with three columns:
Sales Order, Sales Order Line, and Line Number. It also includes a hierarchy. These table
resources support report designs that need to filter, group by, or drill down through
orders and order lines.
As the Sales Order table is derived from the sales data, there should be exactly the same
number of rows in each table. Further, there should be matching values between each
SalesOrderLineID column.
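A minimal Power Query sketch of deriving that surrogate key column; the Sales query and column names are assumptions, and it assumes OrderNumber is numeric (or has already been converted from text).

```powerquery-m
let
    Source = Sales,
    // Key value = order number * 1000 + order line number, as described above.
    AddedKey = Table.AddColumn(
        Source,
        "SalesOrderLineID",
        each [OrderNumber] * 1000 + [OrderLineNumber],
        Int64.Type
    )
in
    AddedKey
```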
Now consider an example of the row data spans across tables scenario, involving two model tables. The first table is named Product, and it contains three columns: Color, Product, and SKU. The second table is named Product Category, and it contains two columns: Category and SKU. A one-to-one relationship relates the two SKU columns. The
relationship filters in both directions, which is always the case for one-to-one
relationships.
To help describe how the relationship filter propagation works, the model diagram has
been modified to reveal the table rows. All examples in this article are based on this
data.
Note
It's not possible to display table rows in the Power BI Desktop model diagram. It's
done in this article to support the discussion with clear examples.
The row details for the two tables are described in the following bulleted list:
Notice that the Product Category table doesn't include a row for the product SKU CL-
02. We'll discuss the consequences of this missing row later in this article.
In the Fields pane, report authors will find product-related fields in two tables: Product
and Product Category.
Let's see what happens when fields from both tables are added to a table visual. In this
example, the SKU column is sourced from the Product table.
Notice that the Category value for product SKU CL-02 is BLANK. It's because there's no
row in the Product Category table for this product.
Recommendations
When possible, we recommend you avoid creating one-to-one model relationships when row data spans across model tables. Instead, consolidate the data into a single model table. The following steps present a methodology to consolidate and model the one-to-one related data (a Power Query sketch follows the steps):
1. Merge queries: When combining the two queries, give consideration to the
completeness of data in each query. If one query contains a complete set of rows
(like a master list), merge the other query with it. Configure the merge
transformation to use a left outer join, which is the default join type. This join type
ensures you'll keep all rows of the first query, and supplement them with any
matching rows of the second query. Expand all required columns of the second
query into the first query.
2. Disable query load: Be sure to disable the load of the second query. This way, it
won't load its result as a model table. This configuration reduces the data model
storage size, and helps to unclutter the Fields pane.
In our example, report authors now find a single table named Product in the Fields
pane. It contains all product-related fields.
3. Replace missing values: If the second query has unmatched rows, NULLs will
appear in the columns introduced from it. When appropriate, consider replacing
NULLs with a token value. Replacing missing values is especially important when
report authors filter or group by the column values, as BLANKs could appear in
report visuals.
In the following table visual, notice that the category for product SKU CL-02 now
reads [Undefined]. In the query, null categories were replaced with this token text
value.
In our example, report authors now can use a hierarchy that has two levels:
Category and Product.
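Here's the Power Query sketch mentioned above, covering steps 1 and 3. It assumes Product is the complete query and Product Category is the second query; all names are illustrative.

```powerquery-m
let
    // Step 1: a left outer merge keeps every Product row and adds matching
    // Product Category rows. (Also disable the load of the Product Category
    // query, as in step 2.)
    Merged = Table.NestedJoin(
        Product, {"SKU"},
        #"Product Category", {"SKU"},
        "CategoryTable", JoinKind.LeftOuter
    ),
    Expanded = Table.ExpandTableColumn(Merged, "CategoryTable", {"Category"}),
    // Step 3: replace nulls from unmatched rows with a token value.
    Replaced = Table.ReplaceValue(Expanded, null, "[Undefined]", Replacer.ReplaceValue, {"Category"})
in
    Replaced
```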
If you like how separate tables help organize your fields, we still recommend
consolidating into a single table. You can still organize your fields, but by using display
folders instead.
In our example, report authors can find the Category field within the Marketing display
folder.
Should you still decide to define one-to-one intra source group relationships in your
model, when possible, ensure there are matching rows in the related tables. As a one-
to-one intra source group relationship is evaluated as a regular relationship, data
integrity issues could surface in your report visuals as BLANKs. (You can see an example
of a BLANK grouping in the first table visual presented in this article.)
Let's see what happens when fields from both tables are added to a table visual, and a
limited relationship exists between the tables.
The table displays two rows only. Product SKU CL-02 is missing because there's no
matching row in the Product Category table.
Next steps
For more information related to this article, check out the following resources:
Many-to-many relationship guidance

This article targets you as a data modeler working with Power BI Desktop. It describes
three different many-to-many modeling scenarios. It also provides you with guidance on
how to successfully design for them in your models.
Note
It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.
There are, in fact, three many-to-many scenarios. They can occur when you're required to:
Relate two dimension-type tables
Relate two fact-type tables
Relate higher grain fact-type tables, when the fact-type table stores rows at a higher grain than the dimension-type table rows

The first scenario is the classic case of relating two dimension-type tables, such as customers and bank accounts: a customer can have many accounts, and an account can have many customers (joint account holders). Modeling these entities is straightforward. One dimension-type table stores accounts,
and another dimension-type table stores customers. As is characteristic of dimension-
type tables, there's an ID column in each table. To model the relationship between the
two tables, a third table is required. This table is commonly referred to as a bridging
table. In this example, its purpose is to store one row for each customer-account
association. Interestingly, when this table only contains ID columns, it's called a factless
fact table.
The first table is named Account, and it contains two columns: AccountID and Account.
The second table is named AccountCustomer, and it contains two columns: AccountID
and CustomerID. The third table is named Customer, and it contains two columns:
CustomerID and Customer. Relationships don't exist between any of the tables.
Two one-to-many relationships are added to relate the tables. Here's an updated model
diagram of the related tables. A fact-type table named Transaction has been added. It
records account transactions. The bridging table and all ID columns have been hidden.
To help describe how the relationship filter propagation works, the model diagram has
been modified to reveal the table rows.
7 Note
It's not possible to display table rows in the Power BI Desktop model diagram. It's
done in this article to support the discussion with clear examples.
The row details for the four tables are described in the following bulleted list:
Below are two visuals that summarize the Amount column from the Transaction table.
The first visual groups by account, and so the sum of the Amount column represents
the account balance. The second visual groups by customer, and so the sum of the
Amount column represents the customer balance.
The first visual is titled Account Balance, and it has two columns: Account and Amount.
It displays the following result:
The second visual is titled Customer Balance, and it has two columns: Customer and
Amount. It displays the following result:
A quick glance at the table rows and the Account Balance visual reveals that the result is
correct, for each account and the total amount. It's because each account grouping
results in a filter propagation to the Transaction table for that account.
However, something doesn't appear correct with the Customer Balance visual. Each
customer in the Customer Balance visual has the same balance as the total balance. This
result could only be correct if every customer was a joint account holder of every
account. That's not the case in this example. The issue is related to filter propagation. It's
not flowing all the way to the Transaction table.
Follow the relationship filter directions from the Customer table to the Transaction
table. It should be apparent that the relationship between the Account and
AccountCustomer tables is propagating in the wrong direction. The filter direction for
this relationship must be set to Both.
As expected, there has been no change to the Account Balance visual.
The Customer Balance visual, however, now displays the following result:
The Customer Balance visual now displays a correct result. Follow the filter directions for
yourself, and see how the customer balances were calculated. Also, understand that the
visual total means all customers.
Someone unfamiliar with the model relationships could conclude that the result is
incorrect. They might ask: Why isn't the total balance for Customer-91 and Customer-92
equal to 350 (75 + 275)?
The answer to their question lies in understanding the many-to-many relationship. Each
customer balance can represent the addition of multiple account balances, and so the
customer balances are non-additive.
Add each many-to-many related entity as a model table, ensuring it has a unique
identifier (ID) column
Add a bridging table to store associated entities
Create one-to-many relationships between the three tables
Configure one bi-directional relationship to allow filter propagation to continue to
the fact-type tables
When it isn't appropriate to have missing ID values, set the Is Nullable property of
ID columns to FALSE—data refresh will then fail if missing values are sourced
Hide the bridging table (unless it contains additional columns or measures
required for reporting)
Hide any ID columns that aren't suitable for reporting (for example, when IDs are
surrogate keys)
If it makes sense to leave an ID column visible, ensure that it's on the "one" side of
the relationship—always hide the "many"-side column. It results in the best filter
performance.
To avoid confusion or misinterpretation, communicate explanations to your report
users—you can add descriptions with text boxes or visual header tooltips
Let's consider an example that involves two fact-type tables: Order and Fulfillment. The
Order table contains one row per order line, and the Fulfillment table can contain zero
or more rows per order line. Rows in the Order table represent sales orders. Rows in the
Fulfillment table represent order items that have been shipped. A many-to-many
relationship relates the two OrderID columns, with filter propagation only from the
Order table (Order filters Fulfillment).
Let's now take a look at the table rows. In the Fulfillment table, notice that order lines
can be fulfilled by multiple shipments. (The absence of an order line means the order is
yet to be fulfilled.)
The row details for the two tables are described in the following bulleted list:
Let's see what happens when the model is queried. Here's a table visual comparing
order and fulfillment quantities by the Order table OrderID column.
The visual presents an accurate result. However, the usefulness of the model is limited—
you can only filter or group by the Order table OrderID column.
Instead of relating fact-type tables directly, we recommend you adopt Star Schema
design principles. You do it by adding dimension-type tables. The dimension-type tables
then relate to the fact-type tables by using one-to-many relationships. This design
approach is robust as it delivers flexible reporting options. It lets you filter or group
using any of the dimension-type columns, and summarize any related fact-type table.
The model now has four additional tables: OrderLine, OrderDate, Product, and
FulfillmentDate
The four additional tables are all dimension-type tables, and one-to-many
relationships relate these tables to the fact-type tables
The OrderLine table contains an OrderLineID column, which represents the
OrderID value multiplied by 100, plus the OrderLine value—a unique identifier for
each order line (see the sketch following this list)
The Order and Fulfillment tables now contain an OrderLineID column, and they no
longer contain the OrderID and OrderLine columns
The Fulfillment table now contains OrderDate and ProductID columns
The FulfillmentDate table relates only to the Fulfillment table
All unique identifier columns are hidden
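As a sketch of how the OrderLineID key could be derived, here's one possible calculated column definition. It assumes the key is computed in DAX on the original Order table, though the same logic could equally be applied in Power Query during data preparation.
DAX
OrderLineID = ('Order'[OrderID] * 100) + 'Order'[OrderLine]
For example, OrderID 8 and OrderLine 2 would produce an OrderLineID of 802.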
Taking the time to apply star schema design principles delivers the following benefits:
Your report visuals can filter or group by any visible column from the dimension-
type tables
Your report visuals can summarize any visible column from the fact-type tables
Filters applied to the OrderLine, OrderDate, or Product tables will propagate to
both fact-type tables
All relationships are one-to-many, and each relationship is a regular relationship.
Data integrity issues won't be masked. For more information, see Model
relationships in Power BI Desktop (Relationship evaluation).
Let's consider an example involving four tables: Date, Sales, Product, and Target. The
Date and Product are dimension-type tables, and one-to-many relationships relate each
to the Sales fact-type table. So far, it represents a good star schema design. The Target
table, however, is yet to be related to the other tables.
The Target table contains three columns: Category, TargetQuantity, and TargetYear. The
table rows reveal a granularity of year and product category. In other words, targets—
used to measure sales performance—are set each year for each product category.
Because the Target table stores data at a higher level than the dimension-type tables, a
one-to-many relationship cannot be created. Well, it's true for just one of the
relationships. Let's explore how the Target table can be related to the dimension-type
tables.
Tip
When storing facts at a higher time granularity than day, set the column data type
to Date (or Whole number if you're using date keys). In the column, store a value
representing the first day of the time period. For example, a year period is recorded
as January 1 of the year, and a month period is recorded as the first day of that
month.
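For example, here's a minimal calculated column sketch that produces such a first-of-period date for the Target table, assuming its TargetYear column stores a whole number year.
DAX
Target Date = DATE(Target[TargetYear], 1, 1)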
Care must be taken, however, to ensure that month or date level filters produce a
meaningful result. Without any special calculation logic, report visuals may report that
target dates are literally the first day of each year. All other days—and all months except
January—will summarize the target quantity as BLANK.
The following matrix visual shows what happens when the report user drills from a year
into its months. The visual is summarizing the TargetQuantity column. (The Show items
with no data option has been enabled for the matrix rows.)
To avoid this behavior, we recommend you control the summarization of your fact data
by using measures. One way to control the summarization is to return BLANK when
lower-level time periods are queried. Another way—defined with some sophisticated
DAX—is to apportion values across lower-level time periods.
Consider the following measure definition that uses the ISFILTERED DAX function. It
returns a value only when neither the Date column nor the Month column is filtered.
DAX
Target Quantity =
IF(
NOT ISFILTERED('Date'[Date])
&& NOT ISFILTERED('Date'[Month]),
SUM(Target[TargetQuantity])
)
The following matrix visual now uses the Target Quantity measure. It shows that all
monthly target quantities are BLANK.
The Category columns (from both the Product and Target tables) contain duplicate
values. So, there's no "one" side for a one-to-many relationship. In this case, you'll need to
create a many-to-many relationship. The relationship should propagate filters in a single
direction, from the dimension-type table to the fact-type table.
Let's now take a look at the table rows.
In the Target table, there are four rows: two rows for each target year (2019 and 2020),
and two categories (Clothing and Accessories). In the Product table, there are three
products. Two belong to the clothing category, and one belongs to the accessories
category. One of the clothing colors is green, and the remaining two are blue.
A table visual grouping by the Category column from the Product table produces the
following result.
This visual produces the correct result. Let's now consider what happens when the Color
column from the Product table is used to group target quantity.
A filter on the Color column from the Product table results in two rows. One of the rows
is for the Clothing category, and the other is for the Accessories category. These two
category values are propagated as filters to the Target table. In other words, because
the color blue is used by products from two categories, those categories are used to
filter the targets.
Consider the following measure definition. Notice that all Product table columns that
are beneath the category level are tested for filters.
DAX
Target Quantity =
IF(
NOT ISFILTERED('Product'[ProductID])
&& NOT ISFILTERED('Product'[Product])
&& NOT ISFILTERED('Product'[Color]),
SUM(Target[TargetQuantity])
)
The following table visual now uses the Target Quantity measure. It shows that all color
target quantities are BLANK.
The final model design looks like the following.
This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on when to create active or inactive model relationships. By default,
active relationships propagate filters to other tables. Inactive relationships, however, only
propagate filters when a DAX expression activates (uses) the relationship.
7 Note
It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.
Active relationships
Generally, we recommend defining active relationships whenever possible. They widen
the scope and potential of how your model can be used by report authors, and users
working with Q&A.
While this design works well for relational star schema designs, it doesn't work for Power BI
models. It's because model relationships are paths for filter propagation, and these
paths must be deterministic. For this reason, a model cannot have multiple active
relationships between two tables. Therefore—as described in this example—one
relationship is active while the other is inactive (represented by the dashed line).
Specifically, it's the relationship to the ArrivalAirport column that's active. This means
filters applied to the Airport table automatically propagate to the ArrivalAirport column
of the Flight table.
This model design imposes severe limitations on how the data can be reported.
Specifically, it's not possible to filter the Airport table to automatically isolate flight
details for a departure airport. As reporting requirements involve filtering (or grouping)
by departure and arrival airports at the same time, two active relationships are needed.
Translating this requirement into a Power BI model design means the model must have
two airport tables.
The improved model design supports producing the following report design.
The report page filters by Melbourne as the departure airport, and the table visual
groups by arrival airports.
7 Note
For Import models, the additional table has resulted in an increased model size,
and longer refresh times. As such, it contradicts the recommendations described in
the Data reduction techniques for Import modeling article. However, in the
example, the requirement to have only active relationships overrides these
recommendations.
Further, it's common that dimension-type tables contain low row counts relative to
fact-type table row counts. So, the increased model size and refresh times aren't
likely to be excessively large.
Refactoring methodology
Here's a methodology to refactor a model from a single role-playing dimension-type
table, to a design with one table per role.
3. Create a copy of the role-playing table, providing it with a name that reflects its
role. If it's an Import table, we recommend defining a calculated table. If it's a
DirectQuery table, you can duplicate the Power Query query.
In the example, the Departure Airport table was created by using the following
calculated table definition.
DAX
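// A minimal sketch, assuming the original role-playing dimension table is named
// Airport: the calculated table simply copies it, so that each role gets its own table.
Departure Airport = 'Airport'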
5. Consider renaming the columns in the tables so they accurately reflect their role. In
the example, all columns are prefixed with the word Departure or Arrival. These
names ensure report visuals, by default, will have self-describing and unambiguous
labels. It also improves the Q&A experience, allowing users to easily
write their questions.
Inactive relationships
In specific circumstances, inactive relationships can address special reporting needs.
Let's now consider different model and reporting requirements:
A sales model contains a Sales table that has two date columns: OrderDate and
ShipDate
Each row in the Sales table records a single order
Date filters are almost always applied to the OrderDate column, which always
stores a valid date
Only one measure requires date filter propagation to the ShipDate column, which
can contain BLANKs (until the order is shipped)
There's no requirement to simultaneously filter (or group by) order and ship date
periods
There are two model relationships between the Sales and Date tables. In the Sales table,
the OrderDate and ShipDate columns relate to the Date column of the Date table. In
this model, the two roles for the Date table are order date and ship date. It's the
relationship to the OrderDate column that's active.
All of the six measures—except one—must filter by the OrderDate column. The Orders
Shipped measure, however, must filter by the ShipDate column.
Here's the Orders measure definition. It simply counts the rows of the Sales table within
the filter context. Any filters applied to the Date table will propagate to the OrderDate
column.
DAX
Orders = COUNTROWS(Sales)
Here's the Orders Shipped measure definition. It uses the USERELATIONSHIP DAX
function, which activates filter propagation for a specific relationship only during the
evaluation of the expression. In this example, the relationship to the ShipDate column is
used.
DAX
Orders Shipped =
CALCULATE(
COUNTROWS(Sales)
,USERELATIONSHIP('Date'[Date], Sales[ShipDate])
)
The report page filters by quarter 2019 Q4. The table visual groups by month and
displays various sales statistics. The Orders and Orders Shipped measures produce
different results. They each use the same summarization logic (count rows of the Sales
table), but different Date table filter propagation.
Notice that the quarter slicer includes a BLANK item. This slicer item appears as a result
of table expansion. While each Sales table row has an order date, some rows have a
BLANK ship date—these orders are yet to be shipped. Table expansion considers
inactive relationships too, and so BLANKs can appear due to BLANKs on the many-side
of the relationship, or due to data integrity issues.
7 Note
In specific circumstances, however, you can define one or more inactive relationships for
a role-playing dimension-type table. You can consider this design when:
Next steps
For more information related to this article, check out the following resources:
This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on when to create bi-directional model relationships. A bi-directional
relationship is one that filters in both directions.
7 Note
It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.
There are three scenarios when bi-directional filtering can solve specific requirements:
The first table is named Customer, and it contains three columns: Country-Region,
Customer, and CustomerCode. The second table is named Product, and it contains
three columns: Color, Product, and SKU. The third table is named Sales, and it contains
four columns: CustomerCode, OrderDate, Quantity, and SKU. The Customer and
Product tables are dimension-type tables, and each has a one-to-many relationship to
the Sales table. Each relationship filters in a single direction.
To help describe how bi-directional filtering works, the model diagram has been
modified to reveal the table rows. All examples in this article are based on this data.
7 Note
It's not possible to display table rows in the Power BI Desktop model diagram. It's
done in this article to support the discussion with clear examples.
The row details for the three tables are described in the following bulleted list:
The page consists of two slicers and a card visual. The first slicer is for Country-Region
and it has two items: Australia and United States. It currently slices by Australia. The
second slicer is for Product, and it has three items: Hat, Jeans, and T-shirt. No items are
selected (meaning no products are filtered). The card visual displays a quantity of 30.
When report users slice by Australia, you might want to limit the Product slicer to
display items where data relates to Australian sales. It's what's meant by showing slicer
items "with data". You can achieve this behavior by configuring the relationship between
the Product and Sales table to filter in both directions.
The Product slicer now lists a single item: T-shirt. This item represents the only product
sold to Australian customers.
We first suggest you consider carefully whether this design works for your report users.
Some report users find the experience confusing. They don't understand why slicer
items dynamically appear or disappear when they interact with other slicers.
If you do decide to show slicer items "with data", we don't recommend you configure
bi-directional relationships. Bi-directional relationships require more processing and so
they can negatively affect query performance—especially as the number of bi-
directional relationships in your model increases.
There's a better way to achieve the same result: Instead of using bi-directional filters,
you can apply a visual-level filter to the Product slicer itself.
Let's now consider that the relationship between the Product and Sales table no longer
filters in both directions. And, the following measure definition has been added to the
Sales table.
DAX
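// A minimal sketch, assuming the measure simply sums the Quantity column of the Sales table.
Total Quantity = SUM(Sales[Quantity])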
To show the Product slicer items "with data", it simply needs to be filtered by the Total
Quantity measure using the "is not blank" condition.
Dimension-to-dimension analysis
A different scenario involving bi-directional relationships treats a fact-type table like a
bridging table. This way, it supports analyzing dimension-type table data within the filter
context of a different dimension-type table.
Using the example model in this article, consider how the following questions can be
answered:
Both questions can be answered without summarizing data in the bridging fact-type
table. They do, however, require that filters propagate from one dimension-type table to
the other. Once filters propagate via the fact-type table, summarization of dimension-
type table columns can be achieved using the DISTINCTCOUNT DAX function—and
possibly the MIN and MAX DAX functions.
As the fact-type table behaves like a bridging table, you can follow the many-to-many
relationship guidance to relate two dimension-type tables. It will require configuring at
least one relationship to filter in both directions. For more information, see Many-to-
many relationship guidance (Relate many-to-many dimensions).
However, as already described in this article, this design will likely result in a negative
impact on performance, and the user experience consequences related to slicer items
"with data". So, we recommend that you activate bi-directional filtering in a measure
definition by using the CROSSFILTER DAX function instead. The CROSSFILTER function
can be used to modify filter directions—or even disable the relationship—during the
evaluation of an expression.
Consider the following measure definition added to the Sales table. In this example, the
model relationship between the Customer and Sales tables has been configured to filter
in a single direction.
DAX
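// A minimal sketch of the Different Countries Sold measure. It assumes the
// Customer-Sales relationship relates the CustomerCode columns. CROSSFILTER
// activates bi-directional filtering only while the expression is evaluated.
Different Countries Sold =
CALCULATE(
    DISTINCTCOUNT(Customer[Country-Region]),
    CROSSFILTER(Customer[CustomerCode], Sales[CustomerCode], BOTH)
)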
During the evaluation of the Different Countries Sold measure expression, the
relationship between the Customer and Sales tables filters in both directions.
The following table visual presents statistics for each product sold. The Quantity column
is simply the sum of quantity values. The Different Countries Sold column represents
the distinct count of country-region values of all customers who have purchased the
product.
Next steps
For more information related to this article, check out the following resources:
This article targets you as a data modeler working with Power BI Desktop. It provides
you with guidance on how to troubleshoot specific issues you may encounter when
developing models and reports.
7 Note
It's also important that you have an understanding of star schema design. For more
information, see Understand star schema and the importance for Power BI.
Troubleshooting
When a report visual is configured to use fields from two (or more) tables, and it doesn't
present the correct result (or any result), it's possible that the issue is related to model
relationships.
In this case, here's a general troubleshooting checklist to follow. You can progressively
work through the checklist until you identify the issue(s).
1. Switch the visual to a table or matrix, or open the "See Data" pane—it's easier to
troubleshoot issues when you can see the query result
2. If there's an empty query result, switch to Data view—verify that tables have been
loaded with rows of data
3. Switch to Model view—it's easy to see the relationships and quickly determine
their properties
4. Verify that relationships exist between the tables
5. Verify that cardinality properties are correctly configured—they could be incorrect
if a "many"-side column presently contains unique values, and has been incorrectly
configured as a "one"-side
6. Verify that the relationships are active (solid line)
7. Verify that the filter directions support propagation (interpret arrow heads)
8. Verify that the correct columns are related—either select the relationship, or hover
the cursor over it, to reveal the related columns
9. Verify that the related column data types are the same, or at least compatible—it's
possible to relate a text column to a whole number column, but filters won't find
any matches to propagate
10. Switch to Data view, and verify that matching values can be found in related
columns
Troubleshooting guide
Here's a list of issues together with possible solutions.
Issue: The visual displays no result.
Possible reasons:
- The model is yet to be loaded with data
- No data exists within the filter context
- Row-level security is enforced
- Relationships aren't propagating between tables—follow the checklist above
- Row-level security is enforced, but a bi-directional relationship isn't enabled to propagate—see Row-level security (RLS) with Power BI Desktop
Next steps
For more information related to this article, check out the following resources:
This article targets data modelers developing Power BI DirectQuery models, developed
by using either Power BI Desktop or the Power BI service. It describes DirectQuery use
cases, limitations, and guidance. Specifically, the guidance is designed to help you
determine whether DirectQuery is the appropriate mode for your model, and to improve
the performance of your reports based on DirectQuery models. This article applies to
DirectQuery models hosted in the Power BI service or Power BI Report Server.
7 Note
For considerations when using DirectQuery storage mode for Dataverse, see Power
BI modeling guidance for Power Platform.
This article doesn't directly cover composite models. A Composite model consists of at
least one DirectQuery source, and possibly more. The guidance described in this article
is still relevant—at least in part—to Composite model design. However, the implications
of combining Import tables with DirectQuery tables aren't in scope for this article. For
more information, see Use composite models in Power BI Desktop.
7 Note
We understand that not all modelers have the permissions or skills to optimize a
relational database. While it is the preferred layer to prepare the data for a
DirectQuery model, some optimizations can also be achieved in the model design,
without modifying the source database. However, best optimization results are
often achieved by applying optimizations to the source database.
Design distributed tables: For Azure Synapse Analytics (formerly SQL Data
Warehouse) sources, which use Massively Parallel Processing (MPP) architecture,
consider configuring large fact-type tables as hash distributed, and dimension-
type tables to replicate across all the compute nodes. For more information, see
Guidance for designing distributed tables in Azure Synapse Analytics (formerly SQL
Data Warehouse).
Ensure required data transformations are materialized: For SQL Server relational
database sources (and other relational database sources), computed columns can
be added to tables. These columns are based on an expression, like Quantity
multiplied by UnitPrice. Computed columns can be persisted (materialized) and,
like regular columns, sometimes they can be indexed. For more information, see
Indexes on Computed Columns.
Consider also indexed views that can pre-aggregate fact table data at a higher
grain. For example, if the Sales table stores data at order line level, you could
create a view to summarize this data. The view could be based on a SELECT
statement that groups the Sales table data by date (at month level), customer,
product, and summarizes measure values like sales, quantity, etc. The view can
then be indexed. For SQL Server or Azure SQL Database sources, see Create
Indexed Views.
Avoid complex Power Query queries: An efficient model design can be achieved
by removing the need for the Power Query queries to apply any transformations. It
means that each query maps to a single relational database source table or view.
You can preview a representation of the actual SQL query statement for a Power
Query applied step, by selecting the View Native Query option.
Examine the use of calculated columns and data type changes: DirectQuery
models support adding calculations and Power Query steps to convert data types.
However, better performance is often achieved by materializing transformation
results in the relational database source, when possible.
Do not use Power Query relative date filtering: It's possible to define relative date
filtering in a Power Query query. For example, to retrieve the sales orders that
were created in the last year (relative to today's date). This type of filter translates
to an inefficient native query, as follows:
SQL
…
from [dbo].[Sales] as [_]
where [_].[OrderDate] >= convert(datetime2, '2018-01-01 00:00:00') and
[_].[OrderDate] < convert(datetime2, '2019-01-01 00:00:00'))
A better design approach is to include relative time columns in the date table.
These columns store offset values relative to the current date. For example, in a
RelativeYear column, the value zero represents current year, -1 represents previous
year, etc. Preferably, the RelativeYear column is materialized in the date table.
While less efficient, it could also be added as a model calculated column, based on
the expression using the TODAY and DATE DAX functions.
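For example, here's a minimal sketch of such a calculated column—assuming a Date table with a Date column—for when materializing the column at the source isn't possible. One simple form uses the YEAR and TODAY functions.
DAX
RelativeYear = YEAR('Date'[Date]) - YEAR(TODAY())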
The combined column can be created with either a Power Query custom column,
or in the model as a calculated column. However, it should be avoided as the
calculation expression will be embedded into the source queries. Not only is it
inefficient, it commonly prevents the use of indexes. Instead, add materialized
columns in the relational database source, and consider indexing them. You can
also consider adding surrogate key columns to dimension-type tables, which is a
common practice in relational data warehouse designs.
There's one exception to this guidance, and it concerns the use of the
COMBINEVALUES DAX function. The purpose of this function is to support multi-
column model relationships. Rather than generate an expression that the
relationship uses, it generates a multi-column SQL join predicate.
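For example, here's a hypothetical calculated column (the table and column names are illustrative only) that could support a relationship on the combination of two columns.
DAX
OrderLineKey = COMBINEVALUES("|", Sales[OrderID], Sales[OrderLine])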
Limit parallel queries: You can set the maximum number of connections
DirectQuery opens for each underlying data source. It controls the number of
queries concurrently sent to the data source.
The setting is only enabled when there's at least one DirectQuery source in the
model. The value applies to all DirectQuery sources, and to any new DirectQuery
sources added to the model.
Increasing the Maximum Connections per Data Source value ensures more
queries (up to the maximum number specified) can be sent to the underlying data
source, which is useful when numerous visuals are on a single page, or many users
access a report at the same time. Once the maximum number of connections is
reached, further queries are queued until a connection becomes available.
Increasing this limit does result in more load on the underlying data source, so the
setting isn't guaranteed to improve overall performance.
When the model is published to Power BI, the maximum number of concurrent
queries sent to the underlying data source also depends on the environment.
Different environments (such as Power BI, Power BI Premium, or Power BI Report
Server) each can impose different throughput constraints. For more information
about Power BI Premium capacity resource limitations, see Deploying and
Managing Power BI Premium Capacities.
Optimize report designs
Reports based on a DirectQuery dataset can be optimized in many ways, as described in
the following bulleted list.
Apply filters first: When first designing reports, we recommend that you apply any
applicable filters—at report, page, or visual level—before mapping fields to the
visual fields. For example, rather than dragging in the CountryRegion and Sales
measures, and then filtering by a particular year, apply the filter on the Year field
first. It's because each step of building a visual will send a query, and while it's
possible to then make another change before the first query has completed, it still
places unnecessary load on the underlying data source. By applying filters early, it
generally makes those intermediate queries less costly and faster. Also, failing to
apply filters early can result in exceeding the 1 million-row limit, as described in the
article about DirectQuery.
Limit the number of visuals on a page: When a report page is opened (and when
page filters are applied) all of the visuals on a page are refreshed. However, there's
a limit on the number of queries that can be sent in parallel, imposed by the Power
BI environment and the Maximum Connections per Data Source model setting, as
described above. So, as the number of page visuals increases, there's higher
chance that they'll be refreshed in a serial manner. It increases the time taken to
refresh the entire page, and it also increases the chance that visuals may display
inconsistent results (for volatile data sources). For these reasons, it's recommended
to limit the number of visuals on any page, and instead have more, simpler pages.
Replacing multiple card visuals with a single multi-row card visual can achieve a
similar page layout.
In addition to the above list of optimization techniques, each of the following reporting
capabilities can contribute to performance issues:
Measure filters: Visuals containing measures (or aggregates of columns) can have
filters applied to those measures. For example, the visual below shows Sales by
Category, but only for categories with more than $15 million of sales.
It may result in two queries being sent to the underlying source:
The first query will retrieve the categories meeting the condition (Sales > $15
million)
The second query will then retrieve the necessary data for the visual, adding the
categories that met the condition to the WHERE clause
TopN filters: Advanced filters can be defined to filter on only the top (or bottom) N
values ranked by a measure. For example, to display only the top five categories in
the above visual. Like the measure filters, it will also result in two queries being
sent to the underlying data source. However, the first query will return all
categories from the underlying source, and then the top N are determined based
on the returned results. Depending on the cardinality of the column involved, it
can lead to performance issues (or query failures due to the 1 million-row limit).
Median: Generally, any aggregation (Sum, Count Distinct, etc.) is pushed to the
underlying source. However, it's not true for Median, as this aggregate isn't
supported by the underlying source. In such cases, detail data is retrieved from the
underlying source, and Power BI evaluates the median from the returned results.
It's fine when the median is to be calculated over a relatively small number of
results, but performance issues (or query failures due to the 1 million-row limit) will
occur if the cardinality is large. For example, median country/region population
might be reasonable, but median sales price might not be.
Visual totals: By default, tables and matrices display totals and subtotals. In many
cases, additional queries must be sent to the underlying source to obtain the
values for the totals. It applies whenever using Count Distinct or Median
aggregates, and in all cases when using DirectQuery over SAP HANA or SAP
Business Warehouse. Such totals should be switched off (by using the Format
pane) if not necessary.
There are many functional and performance enhancements that can be achieved by
converting a DirectQuery model to a Composite model. A Composite model can
integrate more than one DirectQuery source, and it can also include aggregations.
Aggregation tables can be added to DirectQuery tables to import a summarized
representation of the table. They can achieve dramatic performance enhancements
when visuals query higher-level aggregates. For more information, see Aggregations in
Power BI Desktop.
Educate users
It's important to educate your users on how to efficiently work with reports based on
DirectQuery datasets. Your report authors should be educated on the content described
in the Optimize report designs section.
We recommend that you educate your report consumers about your reports that are
based on DirectQuery datasets. It can be helpful for them to understand the general
data architecture, including any relevant limitations described in this article. Let them
know to expect that refresh responses and interactive filtering may at times be slow.
When report users understand why performance degradation happens, they're less likely
to lose trust in the reports and data.
When delivering reports on volatile data sources, be sure to educate report users on the
use of the Refresh button. Let them know also that it may be possible to see
inconsistent results, and that a refresh of the report can resolve any inconsistencies on
the report page.
Next steps
For more information about DirectQuery, check out the following resources:
This article targets data modelers developing Power BI composite models. It describes
composite model use cases, and provides you with design guidance. Specifically, the
guidance can help you determine whether a composite model is appropriate for your
solution. If it is, then this article will also help you design optimal composite models and
reports.
7 Note
Because composite models consist of at least one DirectQuery source, it's also
important that you have a thorough understanding of model relationships,
DirectQuery models, and DirectQuery model design guidance.
7 Note
When a model connects to a tabular model but doesn't extend it with additional
data, it's not a composite model. In this case, it's a DirectQuery model that
connects to a remote model—so it comprises just the one source group. You might
create this type of model to modify source model object properties, like a table
name, column sort order, or format string.
Connecting to tabular models is especially relevant when extending an enterprise
semantic model (when it's a Power BI dataset or Analysis Services model). An enterprise
semantic model is fundamental to the development and operation of a data warehouse.
It provides an abstraction layer over the data in the data warehouse to present business
definitions and terminology. It's commonly used as a link between physical data models
and reporting tools, like Power BI. In most organizations, it's managed by a central team,
and that's why it's described as enterprise. For more information, see the enterprise BI
usage scenario.
Your model could be a DirectQuery model, and you want to boost performance. In
a composite model, you can improve performance by setting up appropriate
storage for each table. You can also add user-defined aggregations. Both of these
optimizations are described later in this article.
You want to combine a DirectQuery model with more data, which must be
imported into the model. You can load imported data from a different data source,
or from calculated tables.
You want to combine two or more DirectQuery data sources into a single model.
These sources could be relational databases or other tabular models.
7 Note
Whenever possible, it's best to develop a model in import mode. This mode provides
the greatest design flexibility, and best performance.
However, challenges related to large data volumes, or reporting on near real-time data,
can't always be solved by import models. In either of these cases, you can consider a
DirectQuery model, providing your data is stored in a single data source that's
supported by DirectQuery mode. For more information, see DirectQuery models in
Power BI Desktop.
Tip
If your objective is only to extend an existing tabular model with more data,
whenever possible, add that data to the existing data source.
DirectQuery: We recommend that you set this mode for tables that represent large
data volumes, or which need to deliver near real-time results. Data will never be
imported into these tables. Usually, these tables will be fact-type tables, which are
tables that are summarized.
Import: We recommend that you set this mode for tables that aren't used for
filtering and grouping of fact tables in DirectQuery or Hybrid mode. It's also the
only option for tables based on sources not supported by DirectQuery mode.
Calculated tables are always import tables.
Dual: We recommend that you set this mode for dimension-type tables, when
there's a possibility they'll be queried together with DirectQuery fact-type tables
from the same source.
Hybrid: We recommend that you set this mode by adding import partitions, and
one DirectQuery partition to a fact table when you want to include the latest data
changes in real time, or when you want to provide fast access to the most
frequently used data through import partitions while leaving the bulk of more
infrequently used data in the data warehouse.
There are several possible scenarios when Power BI queries a composite model.
Queries only import or dual table(s): Power BI retrieves all data from the model
cache. It will deliver the fastest possible performance. This scenario is common for
dimension-type tables queried by filters or slicer visuals.
Queries dual table(s) or DirectQuery table(s) from the same source: Power BI
retrieves all data by sending one or more native queries to the DirectQuery source.
It will deliver good performance, especially when appropriate indexes exist on the
source tables. This scenario is common for queries that relate dual dimension-type
tables and DirectQuery fact-type tables. These queries are intra source group, and
so all one-to-one or one-to-many relationships are evaluated as regular
relationships.
Queries dual table(s) or hybrid table(s) from the same source: This scenario is a
combination of the previous two scenarios. Power BI retrieves data from the model
cache when it's available in import partitions, otherwise it sends one or more
native queries to the DirectQuery source. It will deliver the fastest possible
performance because only a slice of the data is queried in the data warehouse,
especially when appropriate indexes exist on the source tables. As for the dual
dimension-type tables and DirectQuery fact-type tables, these queries are intra
source group, and so all one-to-one or one-to-many relationships are evaluated as
regular relationships.
All other queries: These queries involve cross source group relationships. It's either
because an import table relates to a DirectQuery table, or a dual table relates to a
DirectQuery table from a different source—in which case it behaves as an import
table. All relationships are evaluated as limited relationships. It also means that
groupings applied to non-DirectQuery tables must be sent to the DirectQuery
source as materialized subqueries (virtual tables). In this case, the native query can
be inefficient, especially for large grouping sets.
When aggregations are cached in the model, they behave as import tables (although
they can't be used like a model table). Adding import aggregations to a DirectQuery
model will result in a composite model.
7 Note
Hybrid tables don't support aggregations because some of the partitions operate
in import mode. It's not possible to add aggregations at the level of an individual
DirectQuery partition.
We recommend that an aggregation follows a basic rule: Its row count should be at least
a factor of 10 smaller than the underlying table. For example, if the underlying table
stores 1 billion rows, then the aggregation table shouldn't exceed 100 million rows. This
rule ensures that there's an adequate performance gain relative to the cost of creating
and maintaining the aggregation.
7 Note
In some situations, you can avoid creating a cross source group relationship. See
the Use Sync slicers topic later in this article.
2 Warning
In this scenario, the Region table in source group A has a relationship to the Date table
and Sales table in source group B. The relationship between the Region table and the
Date table is active, while the relationship between the Region table and the Sales table
is inactive. Also, there's an active relationship between the Date table and the Sales
table, both of which are in source group B. The Sales table includes a measure named
TotalSales, and the Region table includes two measures named RegionalSales and
RegionalSalesDirect.
Here are the measure definitions.
DAX
TotalSales = SUM(Sales[Sales])

RegionalSales =
    CALCULATE([TotalSales], USERELATIONSHIP(Region[RegionID], Sales[RegionID]))

RegionalSalesDirect =
    CALCULATE(SUM(Sales[Sales]), USERELATIONSHIP(Region[RegionID], Sales[RegionID]))
Notice how the RegionalSales measure refers to the TotalSales measure, while the
RegionalSalesDirect measure doesn't. Instead, the RegionalSalesDirect measure uses
the expression SUM(Sales[Sales]) , which is the expression of the TotalSales measure.
The difference in the result is subtle. When Power BI evaluates the RegionalSales
measure, it applies the filter from the Region table to both the Sales table and the Date
table. Therefore, the filter also propagates from the Date table to the Sales table. In
contrast, when Power BI evaluates the RegionalSalesDirect measure, it only propagates
the filter from the Region table to the Sales table. The results returned by the
RegionalSales measure and the RegionalSalesDirect measure could differ, even though the
expressions are semantically equivalent.
) Important
Whenever you use the CALCULATE function with an expression that's a measure in a
remote source group, test the calculation results thoroughly.
In this scenario, the Date table is related to the Sales table on the DateKey columns. The
data type of the DateKey columns is integer, storing whole numbers that use the
yyyymmdd format. The tables belong to different source groups. Further, it's a high-
cardinality relationship because the earliest date in the Date table is January 1, 1900 and
the latest date is December 31, 2100—so there's a total of 73,414 rows in the table (one
row for each date in the 1900-2100 time span).
There are two cases for concern.
First, when you use the Date table columns as filters, filter propagation will filter the
DateKey column of the Sales table to evaluate measures. When filtering by a single year,
like 2022, the DAX query will include a filter expression like Sales[DateKey] IN {
20220101, 20220102, …20221231 } . The text size of the query can grow to become
extremely large when the number of values in the filter expression is large, or when the
filter values are long strings. It's expensive for Power BI to generate the long query and
for the data source to run the query.
Second, when you use Date table columns—like Year, Quarter, or Month—as grouping
columns, it results in filters that include all unique combinations of year, quarter, or
month, and the DateKey column values. The string size of the query, which contains
filters on the grouping columns and the relationship column, can become extremely
large. That's especially true when the number of grouping columns and/or the
cardinality of the join column (the DateKey column) is large.
Add the Date table to the data source, resulting in a single source group model
(meaning, it's no longer a composite model).
Raise the granularity of the relationship. For instance, you could add a MonthKey
column to both tables and create the relationship on those columns. However, by
raising the granularity of the relationship, you lose the ability to report on daily
sales activity (unless you use the DateKey column from the Sales table).
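For example, here's a minimal sketch of such a column—assuming the DateKey columns store whole numbers in yyyymmdd format—which would be added to both tables (shown here for the Sales table).
DAX
MonthKey = INT(Sales[DateKey] / 100)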
In this scenario, the Date table in source group B has a relationship to the Sales table in
that source group, and also to the Target table in source group A. All relationships are
one-to-many from the Date table relating the Year columns. The Sales table includes a
SalesAmount column that stores sales amounts, while the Target table includes a
TargetAmount column that stores target amounts.
The Date table stores the years 2021 and 2022. The Sales table stores sales amounts for
years 2021 (100) and 2022 (200), while the Target table stores target amounts for 2021
(100), 2022 (200), and 2023 (300)—a future year.
When a Power BI table visual queries the composite model by grouping on the Year
column from the Date table and summing the SalesAmount and TargetAmount
columns, it won't show a target amount for 2023. That's because the cross source group
relationship is a limited relationship, and so it uses INNER JOIN semantics, which
eliminate rows where there's no matching value on both sides. It will, however, produce
a correct target amount total (600), because a Date table filter doesn't apply to its
evaluation.
If the relationship between the Date table and the Target table is an intra source group
relationship (assuming the Target table belonged to source group B), the visual will
include a (Blank) year to show the 2023 (and any other unmatched years) target amount.
) Important
To avoid misreporting, ensure that there are matching values in the relationship
columns when dimension and fact tables reside in different source groups.
Calculations
You should consider specific limitations when adding calculated columns and calculation
groups to a composite model.
Calculated columns
Calculated columns added to a DirectQuery table that source their data from a relational
database, like Microsoft SQL Server, are limited to expressions that operate on a single
row at a time. These expressions can't use DAX iterator functions, like SUMX , or filter
context modification functions, like CALCULATE .
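As an illustration—with hypothetical column names—a row-level expression like the following is supported, whereas an equivalent that relies on an iterator or on CALCULATE isn't.
DAX
Gross Amount = Sales[Quantity] * Sales[UnitPrice]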
7 Note
It's not possible to add calculated columns or calculated tables that depend on
chained tabular models.
A calculated column that breaks this restriction can still be added to the
model. However, it will result in an error when it's used in a visual because it violates the
intra-row evaluation restriction.
7 Note
When you add calculated columns to a composite model, be sure to test all model
calculations. Upstream calculations might not work correctly because they were
authored without considering how the new columns influence the filter context.
Calculation groups
If calculation groups exist in a source group that connects to a Power BI dataset or an
Analysis Services model, Power BI could return unexpected results. For more
information, see Calculation groups, query and measure evaluation.
Model design
You should always optimize a Power BI model by adopting a star schema design.
Tip
For more information, see Understand star schema and the importance for Power
BI.
Be sure to create dimension tables that are separate from fact tables so that Power BI
can interpret joins correctly and produce efficient query plans. While this guidance is
true for any Power BI model, it's especially true for models that you recognize will
become a source group of a composite model. It will allow for simpler and more
efficient integration of other tables in downstream models.
Whenever possible, avoid having dimension tables in one source group that relate to a
fact table in a different source group. That's because it's better to have intra source
group relationships than cross source group relationships, especially for high-cardinality
relationship columns. As described earlier, cross source group relationships rely on
having matching values in the relationship columns, otherwise unexpected results may
be shown in report visuals.
Row-level security
If your model includes user-defined aggregations, calculated columns on import tables,
or calculated tables, ensure that any row-level security (RLS) is set up correctly and
tested.
If the composite model connects to other tabular models, RLS rules are only applied on
the source group (local model) where they're defined. They won't be applied to other
source groups (remote models). Also, you can't define RLS rules on a table from another
source group nor can you define RLS rules on a local table that has a relationship to
another source group.
Report design
In some situations, you can improve the performance of a composite model by
designing an optimized report layout.
Consider a scenario when your model has two source groups. Each source group has a
product dimension table used to filter reseller and internet sales.
In this scenario, source group A contains the Product table that's related to the
ResellerSales table. Source group B contains the Product2 table that's related to the
InternetSales table. There aren't any cross source group relationships.
In the report, you add a slicer that filters the page by using the Color column of the
Product table. By default, the slicer filters the ResellerSales table, but not the
InternetSales table. You then add a hidden slicer by using the Color column of the
Product2 table. By setting an identical group name (found in the sync slicers Advanced
options), filters applied to the visible slicer automatically propagate to the hidden slicer.
7 Note
While using sync slicers can avoid the need to create a cross source group
relationship, it increases the complexity of the model design. Be sure to educate
other users on why you designed the model with duplicate dimension tables. Avoid
confusion by hiding dimension tables that you don't want other users to use. You
can also add description text to the hidden tables to document their purpose.
Other guidance
Here's some other guidance to help you design and maintain composite models.
Performance and scale: If your reports were previously live connected to a Power
BI dataset or Analysis Services model, the Power BI service could reuse visual
caches across reports. After you convert the live connection to create a local
DirectQuery model, reports will no longer benefit from those caches. As a result,
you might experience slower performance or even refresh failures. Also, the
workload for the Power BI service will increase, which might require you to scale up
your capacity or distribute the workload across other capacities. For more
information about data refresh and caching, see Data refresh in Power BI.
Renaming: We don't recommend that you rename datasets used by composite
models, or rename their workspaces. That's because composite models connect to
Power BI datasets by using the workspace and dataset names (and not their
internal unique identifiers). Renaming a dataset or workspace could break the
connections used by your composite model.
Governance: We don't recommend that your single version of the truth model is a
composite model. That's because it would be dependent on other data sources or
models, which if updated, could result in breaking the composite model. Instead,
we recommend that you publish an enterprise semantic model as the single
version of truth. Consider this model to be a reliable foundation. Other data
modelers can then create composite models that extend the foundation model to
create specialized models.
Data lineage: Use the data lineage and dataset impact analysis features before
publishing composite model changes. These features are available in the Power BI
service, and they can help you to understand how datasets are related and used.
It's important to understand that you can't perform impact analysis on external
datasets that are displayed in lineage view but are in fact located in another
workspace. To perform impact analysis on an external dataset, you need to
navigate to the source workspace.
Schema updates: You should refresh your composite model in Power BI Desktop
when schema changes are made to upstream data sources. You'll then need to
republish the model to the Power BI service. Be sure to thoroughly test calculations
and dependent reports.
Next steps
For more information related to this article, check out the following resources.
This article targets you as a data modeler working with Power BI Desktop. It describes
good design practices for enforcing row-level security (RLS) in your data models.
It's important to understand that RLS filters table rows. RLS can't be configured to restrict
access to model objects, including tables, columns, or measures.
7 Note
This article doesn't describe RLS or how to set it up. For more information, see
Restrict data access with row-level security (RLS) for Power BI Desktop.
Create roles
It's possible to create multiple roles. When you're considering the permission needs for
a single report user, strive to create a single role that grants all those permissions,
instead of a design where a report user will be a member of multiple roles. It's because a
report user could map to multiple roles, either directly by using their user account or
indirectly by security group membership. Multiple role mappings can result in
unexpected outcomes.
When a report user is assigned to multiple roles, RLS filters become additive. It means
report users can see table rows that represent the union of those filters. What's more, in
some scenarios it's not possible to guarantee that a report user doesn't see rows in a
table. So, unlike permissions applied to SQL Server database objects (and other
permission models), the "once denied always denied" principle doesn't apply.
Consider a model with two roles: The first role, named Workers, restricts access to all
Payroll table rows by using the following rule expression:
DAX
FALSE()
Note
A rule will return no table rows when its expression evaluates to FALSE.
Yet, a second role, named Managers, allows access to all Payroll table rows by using the
following rule expression:
DAX
TRUE()
Take care: Should a report user map to both roles, they'll see all Payroll table rows.
Optimize RLS
RLS works by automatically applying filters to every DAX query, and these filters may
have a negative impact on query performance. So, efficient RLS comes down to good
model design. It's important to follow established model design guidance.
In general, it's often more efficient to enforce RLS filters on dimension-type tables, and
not fact-type tables. And, rely on well-designed relationships to ensure RLS filters
propagate to other model tables. RLS filters only propagate through active relationships.
So, avoid using the LOOKUPVALUE DAX function when model relationships could
achieve the same result.
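To illustrate the difference, compare a rule that filters the fact table directly by using LOOKUPVALUE with a rule defined on the dimension table that relies on the model relationship. This is only a sketch: the Sales, Salesperson, SalespersonKey, and EmailAddress names are hypothetical.
DAX
-- Less efficient: a rule on the Sales (fact) table that looks up the salesperson
-- email address for every fact row.
LOOKUPVALUE(
    Salesperson[EmailAddress],
    Salesperson[SalespersonKey], Sales[SalespersonKey]
) = USERNAME()

-- More efficient: a rule on the Salesperson (dimension) table; the active
-- relationship propagates the filter to the Sales table.
[EmailAddress] = USERNAME()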
Whenever RLS filters are enforced on DirectQuery tables and there are relationships to
other DirectQuery tables, be sure to optimize the source database. It can involve
designing appropriate indexes or using persisted computed columns. For more
information, see DirectQuery model guidance in Power BI Desktop.
Role members can be user accounts, security groups, distribution groups, or mail-enabled
groups. Whenever possible, we recommend that you map security groups to dataset roles.
Doing so means you manage security group memberships in Azure Active Directory, possibly
delegating the task to your network administrators.
Validate roles
Test each role to ensure it filters the model correctly. It's easily done by using the View
As command on the Modeling ribbon tab.
When the model has dynamic rules using the USERNAME DAX function, be sure to test
for expected and unexpected values. When embedding Power BI content—specifically
using the embed for your customers scenario—app logic can pass any value as an
effective identity user name. Whenever possible, ensure accidental or malicious values
result in filters that return no rows.
Consider an example using Power BI embedded, where the app passes the user's job
role as the effective user name: It's either "Manager" or "Worker". Managers can see all
rows, but workers can only see rows where the Type column value is "Internal".
DAX
IF(
USERNAME() = "Worker",
[Type] = "Internal",
TRUE()
)
The problem with this rule expression is that all values, except "Worker", return all table
rows. So, an accidental value, like "Wrker", unintentionally returns all table rows.
Therefore, it's safer to write an expression that tests for each expected value. In the
following improved rule expression, an unexpected value results in the table returning
no rows.
DAX
IF(
USERNAME() = "Worker",
[Type] = "Internal",
IF(
USERNAME() = "Manager",
TRUE(),
FALSE()
)
)
While it's not possible for a DAX expression to override RLS—in fact, it can't even
determine that RLS is enforced—you can use a summary model table. The summary
model table is queried to retrieve revenue for "all regions", and it's not constrained by
any RLS filters. That way, report users can compare revenue for their own region with
revenue for all regions.
Let's see how you could implement this design requirement. First, consider the following
model design:
The model comprises four tables:
The Salesperson table stores one row per salesperson. It includes the
EmailAddress column, which stores the email address for each salesperson. This
table is hidden.
The Sales table stores one row per order. It includes the Revenue % All Region
measure, which is designed to return a ratio of revenue earned by the report user's
region over revenue earned by all regions.
The Date table stores one row per date and allows filtering and grouping by year and
month.
The SalesRevenueSummary is a calculated table. It stores total revenue for each
order date. This table is hidden.
DAX
SalesRevenueSummary =
SUMMARIZECOLUMNS(
Sales[OrderDate],
"RevenueAllRegion", SUM(Sales[Revenue])
)
A rule enforced on the hidden Salesperson table restricts the report user to their own
row by using the following dynamic rule expression:
DAX
[EmailAddress] = USERNAME()
The Revenue % All Region measure then compares revenue filtered by RLS with the
unconstrained revenue retrieved from the SalesRevenueSummary table.
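Here's a minimal sketch of how such a measure could be written, using the Sales[Revenue] and SalesRevenueSummary[RevenueAllRegion] columns shown earlier; treat it as an illustration rather than an exact formula.
DAX
-- Revenue visible to the report user (constrained by RLS) as a ratio of
-- revenue for all regions (retrieved from the unconstrained summary table).
Revenue % All Region =
DIVIDE(
    SUM(Sales[Revenue]),
    SUM(SalesRevenueSummary[RevenueAllRegion])
)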
Take care to avoid disclosing sensitive facts. If there are only two regions in this
example, then it would be possible for a report user to calculate revenue for the
other region.
For example, a company that has just two sales regions decides to publish a dataset for
each sales region to different workspaces. The datasets don't enforce RLS. They do,
however, use query parameters to filter source data. This way, the same model is
published to each workspace—they just have different dataset parameter values.
Salespeople are assigned access to just one of the workspaces (or published apps).
However, this design approach has several weaknesses:
Multiple workspaces: One workspace is required for each report user audience. If
apps are published, it also means there's one app per report user audience.
Duplication of content: Reports and dashboards must be created in each
workspace. It requires more effort and time to set up and maintain.
High privilege users: High privilege users, who belong to multiple report user
audiences, can't see a consolidated view of the data. They'll need to open multiple
reports (from different workspaces or apps).
Troubleshoot RLS
If RLS produces unexpected results, check for the following issues:
When a specific user can't see any data, it could be because their UPN isn't stored or it's
entered incorrectly. It can happen when their user account has changed as the result of
a name change.
Tip
For testing purposes, add a measure that returns the USERNAME DAX function.
You might name it something like "Who Am I". Then, add the measure to a card
visual in a report and publish it to Power BI.
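A minimal sketch of that diagnostic measure:
DAX
-- Returns the effective user name, for testing purposes only.
Who Am I = USERNAME()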
Creators and consumers with only Read permission on the dataset will only be able to
view the data they're allowed to see (based on their RLS role mapping).
When a user views a report in either a workspace or an app, RLS may or may not be
enforced depending on their dataset permissions. For this reason, it's critical that
content consumers and creators only possess Read permission on the underlying
dataset when RLS must be enforced. For details about the permissions rules that
determine whether RLS is enforced, see the Report consumer security planning article.
Next steps
For more information related to this article, check out the following resources:
Microsoft Dataverse is the standard data platform for many Microsoft business
application products, including Dynamics 365 Customer Engagement and Power Apps
canvas apps, and also Dynamics 365 Customer Voice (formerly Microsoft Forms Pro),
Power Automate approvals, Power Apps portals, and others.
This article provides guidance on how to create a Power BI data model that connects to
Dataverse. It describes differences between a Dataverse schema and an optimized Power
BI schema, and it provides guidance for expanding the visibility of your business
application data in Power BI.
Because of its ease of setup, rapid deployment, and widespread adoption, Dataverse
stores and manages an increasing volume of data in environments across organizations.
That means there's an even greater need—and opportunity—to integrate analytics with
those processes. Opportunities include:
Report on all Dataverse data moving beyond the constraints of the built-in charts.
Provide easy access to relevant, contextually filtered reports within a specific
record.
Enhance the value of Dataverse data by integrating it with external data.
Take advantage of Power BI's built-in artificial intelligence (AI) without the need to
write complex code.
Increase adoption of Power Platform solutions by increasing their usefulness and
value.
Deliver the value of the data in your app to business decision makers.
Import Dataverse data by using the Dataverse connector: This method caches
(stores) Dataverse data in a Power BI model. It delivers fast performance thanks to
in-memory querying. It also offers design flexibility to modelers, allowing them to
integrate data from other sources. Because of these strengths, importing data is
the default mode when creating a model in Power BI Desktop.
Import Dataverse data by using Azure Synapse Link: This method is a variation on
the import method, because it also caches data in the Power BI model, but does so
by connecting to Azure Synapse Analytics. By using Azure Synapse Link for
Dataverse, Dataverse tables are continuously replicated to Azure Synapse or Azure
Data Lake Storage (ADLS) Gen2. This approach is used to report on hundreds of
thousands or even millions of records in Dataverse environments.
Create a DirectQuery connection by using the Dataverse connector: This method
is an alternative to importing data. A DirectQuery model consists only of metadata
defining the model structure. When a user opens a report, Power BI sends native
queries to Dataverse to retrieve data. Consider creating a DirectQuery model when
reports must show near real-time Dataverse data, or when Dataverse must enforce
role-based security so that users can only see the data they have privileges to
access.
Important
While a DirectQuery model can be a good alternative when you need near real-
time reporting or enforcement of Dataverse security in a report, it can result in slow
performance for that report.
You can learn about considerations for DirectQuery later in this article.
To determine the right method for your Power BI model, you should consider:
Query performance
Data volume
Data latency
Role-based security
Setup complexity
Query performance
Queries sent to import models are faster than native queries sent to DirectQuery data
sources. That's because imported data is cached in memory and it's optimized for
analytic queries (filter, group, and summarize operations).
Conversely, DirectQuery models only retrieve data from the source after the user opens
a report, resulting in seconds of delay as the report renders. Additionally, user
interactions on the report require Power BI to requery the source, further reducing
responsiveness.
Data volume
When developing an import model, you should strive to minimize the data that's loaded
into the model. It's especially true for large models, or models that you anticipate will
grow to become large over time. For more information, see Data reduction techniques
for import modeling.
A DirectQuery connection to Dataverse is a good choice when the report's query result
isn't large. A query result is considered large when there are more than 20,000 rows in
the report's source tables, or when the result returned to the report after filters are
applied exceeds 20,000 rows. For smaller results, you can create a DirectQuery Power BI
report by using the Dataverse connector.
Note
The 20,000 row size isn't a hard limit. However, each data source query must return
a result within 10 minutes. Later in this article you will learn how to work within
those limitations and about other Dataverse DirectQuery design considerations.
You can improve the performance of larger datasets by using the Dataverse connector
to import the data into the data model.
Data latency
When the Dataverse data changes rapidly and report users need to see up-to-date data,
a DirectQuery model can deliver near real-time query results.
Tip
You can create a Power BI report that uses automatic page refresh to show real-
time updates, but only when the report connects to a DirectQuery model.
Import data models must complete a data refresh to allow reporting on recent data
changes. Keep in mind that there are limitations on the number of daily scheduled data
refresh operations. You can schedule up to eight refreshes per day on a shared capacity.
On a Premium capacity, you can schedule up to 48 refreshes per day, which can achieve
a 30-minute refresh frequency.
You can also consider using incremental refresh to achieve faster refreshes and near
real-time performance (only available with Premium).
Role-based security
When there's a need to enforce role-based security, it can directly influence the choice
of Power BI model framework.
Dataverse can enforce complex role-based security to control access to specific records
for specific users. For example, a salesperson may be permitted to see only their sales
opportunities, while the sales manager can see all sales opportunities for all salespeople.
You can tailor the level of complexity based on the needs of your organization.
A DirectQuery model based on Dataverse can connect by using the security context of
the report user. That way, the report user will only see the data they're permitted to
access. This approach can simplify the report design, providing performance is
acceptable.
For improved performance, you can create an import model that connects to Dataverse
instead. In this case, you can add row-level security (RLS) to the model, if necessary.
Note
For more information about Power BI RLS, see Row-level security (RLS) guidance in
Power BI Desktop.
Setup complexity
Using the Dataverse connector in Power BI—whether for import or DirectQuery models
—is straightforward and doesn't require any special software or elevated Dataverse
permissions. That's an advantage for organizations or departments that are getting
started.
The Azure Synapse Link option requires system administrator access to Dataverse and
certain Azure permissions. These Azure permissions are required to set up the storage
account and a Synapse workspace.
Recommended practices
This section describes design patterns (and anti-patterns) you should consider when
creating a Power BI model that connects to Dataverse. Only a few of these patterns are
unique to Dataverse, but they tend to be common challenges for Dataverse makers
when they go about building Power BI reports.
Probably the most common, and easily the most challenging, anti-pattern to avoid is
attempting to build a single model that achieves all self-service reporting needs. The
reality is that successful models are built to answer
questions around a central set of facts over a single core topic. While that might initially
seem to limit the model, it's actually empowering because you can tune and optimize
the model for answering questions within that topic.
To help ensure that you have a clear understanding of the model's purpose, ask yourself
the following questions.
Resist combining multiple topic areas into a single model just because the report user
has questions across multiple topic areas that they want addressed by a single report. By
breaking that report out into multiple reports, each with a focus on a different topic (or
fact table), you can produce much more efficient, scalable, and manageable models.
Dataverse, as a relational model, is well suited for its purpose. However, it's not
designed as an analytic model that's optimized for analytical reports. The most prevalent
pattern for modeling analytics data is a star schema design. Star schema is a mature
modeling approach widely adopted by relational data warehouses. It requires modelers
to classify their model tables as either dimension or fact. Reports can filter or group by
using dimension table columns and summarize fact table columns.
For more information, see Understand star schema and the importance for Power BI.
When Power Query queries fold back to the source, the source system, in this case
Dataverse, only needs to deliver filtered or summarized results to Power BI. A folded
query is often significantly faster and more efficient than a query that doesn't fold.
For more information on how you can achieve query folding, see Power Query query
folding.
We recommend that you only retrieve columns that are required by reports. It's often a
good idea to reevaluate and refactor queries when report development is complete,
allowing you to identify and remove unused columns. For more information, see Data
reduction techniques for import modeling (Remove unnecessary columns).
Additionally, ensure that you introduce the Power Query Remove columns step early so
that it folds back to the source. That way, Power Query can avoid the unnecessary work
of extracting source data only to discard it later (in an unfolded step).
When you have a table that contains many columns, it might be impractical to use the
Power Query interactive query builder. In this case, you can start by creating a blank
query. You can then use the Advanced Editor to paste in a minimal query that creates a
starting point.
Consider the following query that retrieves data from just two columns of the account
table.
Power Query M
let
Source = CommonDataService.Database("demo.crm.dynamics.com",
[CreateNavigationProperties=false]),
dbo_account = Source{[Schema="dbo", Item="account"]}[Data],
#"Removed Other Columns" = Table.SelectColumns(dbo_account,
{"accountid", "name"})
in
#"Removed Other Columns"
When you write a native query by using the Value.NativeQuery function, it's important to
add the EnableFolding=true option to ensure queries are folded back to the Dataverse
service. A native query won't fold unless this option is added. Enabling this option can
result in significant performance improvements—up to 97 percent faster in some cases.
Consider the following query that uses a native query to source selected columns from
the account table. The native query will fold because the EnableFolding=true option is
set.
Power Query M
let
Source = CommonDataService.Database("demo.crm.dynamics.com"),
dbo_account = Value.NativeQuery(
Source,
"SELECT A.accountid, A.name FROM account A"
,null
,[EnableFolding=true]
)
in
dbo_account
You can expect to achieve the greatest performance improvements when retrieving a
subset of data from a large data volume.
Tip
Performance improvement can also depend on how Power BI queries the source
database. For example, a measure that uses the DISTINCTCOUNT DAX function
showed almost no improvement with or without the folding hint. When the
measure formula was rewritten to use the SUMX DAX function, the query folded
resulting in a 97 percent improvement over the same query without the hint.
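As a sketch of the kind of rewrite described in this tip, a distinct count can be expressed with SUMX over the distinct values. The Sales[CustomerId] column and the measure names are hypothetical.
DAX
-- Original formulation, which showed little benefit from the folding hint.
Distinct Customers = DISTINCTCOUNT(Sales[CustomerId])

-- Logically equivalent rewrite that uses SUMX.
Distinct Customers (Rewritten) = SUMX(VALUES(Sales[CustomerId]), 1)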
The evaluation stage of a data import iterates through the metadata of its source to
determine all possible table relationships. That metadata can be extensive, especially for
Dataverse. By adding the CreateNavigationProperties=false option to the query, you're
letting Power Query know that you don't intend to use those relationships. The option
allows Power BI Desktop to skip that stage of the refresh and move on to retrieving the
data.
Note
Don't use this option when the query depends on any expanded relationship
columns.
Consider an example that retrieves data from the account table. It contains three
columns related to territory: territory, territoryid, and territoryidname.
When you set the CreateNavigationProperties=false option, the territoryid and
territoryidname columns will remain, but the territory column, which is a relationship
column (it shows Value links), will be excluded. It's important to understand that Power
Query relationship columns are a different concept to model relationships, which
propagate filters between model tables.
Consider the following query that uses the CreateNavigationProperties=false option (in
the Source step) to speed up the evaluation stage of a data import.
Power Query M
let
Source = CommonDataService.Database("demo.crm.dynamics.com"
,[CreateNavigationProperties=false]),
dbo_account = Source{[Schema="dbo", Item="account"]}[Data],
#"Removed Other Columns" = Table.SelectColumns(dbo_account,
{"accountid", "name", "address1_stateorprovince", "address1_country",
"industrycodename", "territoryidname"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Other Columns",
{{"name", "Account Name"}, {"address1_country", "Country"},
{"address1_stateorprovince", "State or Province"}, {"territoryidname",
"Territory"}, {"industrycodename", "Industry"}})
in
#"Renamed Columns"
Note
This option can improve the performance of data refresh of import tables or dual
storage mode tables, including the process of applying Power Query Editor
window changes. It doesn't improve the performance of interactive cross-filtering
of DirectQuery storage mode tables.
If choice (option set) labels are missing or out of date, open the Dataverse Maker Portal,
navigate to the Solutions area, and then select Publish all customizations. The publication
process will update the TDS endpoint with the latest metadata, making the option labels
available to Power BI.
Azure Synapse Link enables a continuous replication of the data and metadata from
Dataverse into the data lake. It also provides a built-in serverless SQL pool as a
convenient data source for Power BI queries.
The strengths of this approach are significant. Customers gain the ability to run
analytics, business intelligence, and machine learning workloads across Dataverse data
by using various advanced services. Advanced services include Apache Spark, Power BI,
Azure Data Factory, Azure Databricks, and Azure Machine Learning.
The setup involves signing in to Power Apps and connecting Dataverse to the Azure
Synapse workspace. A wizard-like experience allows you to create a new link by
selecting the storage account and the tables to export. Azure Synapse Link then copies
data to the ADLS Gen2 storage and automatically creates views in the built-in Azure
Synapse serverless SQL pool. You can then connect to those views to create a Power BI
model.
Tip
You can create a serverless SQL database in the Azure Synapse workspace by using
Azure Synapse Studio. Select Serverless as the SQL database type and enter a database
name. Power Query can connect to this database by connecting to the workspace SQL
endpoint.
Consider a SQL view defined over the campaign table. The view includes only four
columns, each aliased with a friendly name, and a WHERE clause to return only necessary
rows, in this case active campaigns. The view also joins the campaign table to the
OptionsetMetadata and StatusMetadata tables, which retrieve choice labels.
Tip
For more information on how to retrieve metadata, see Access choice labels
directly from Azure Synapse Link for Dataverse.
Azure Synapse Link provides two types of table data:
Near real-time data: Provides a copy of data synchronized from Dataverse via
Azure Synapse Link in an efficient manner by detecting what data has changed
since it was initially extracted or last synchronized.
Snapshot data: Provides a read-only copy of near real-time data that's updated at
regular intervals (in this case every hour). Snapshot data table names have
_partitioned appended to their name.
If you anticipate that a high volume of read and write operations will be executed
simultaneously, retrieve data from the snapshot tables to avoid query failures.
For more information, see Access near real-time data and read-only snapshot data.
We recommend that you consider the topics in this section when working with
DirectQuery.
For more information about determining when to work with DirectQuery storage mode,
see Choose a Power BI model framework.
You should consider setting dimension tables to dual storage mode, when appropriate.
That way, slicer visuals and filter card lists—which are often based on dimension table
columns—will render more quickly because they'll be queried from imported data.
Important
When a dimension table needs to inherit the Dataverse security model, it isn't
appropriate to use dual storage mode.
Fact tables, which typically store large volumes of data, should remain as DirectQuery
storage mode tables. They'll be filtered by the related dual storage mode dimension
tables, which can be joined to the fact table to achieve efficient filtering and grouping.
Consider the following data model design. Three dimension tables, Owner, Account,
and Campaign have a striped upper border, which means they're set to dual storage
mode.
For more information on table storage modes including dual storage, see Manage
storage mode in Power BI Desktop.
Enable single-sign on
When you publish a DirectQuery model to the Power BI service, you can use the dataset
settings to enable single sign-on (SSO) by using Azure Active Directory (Azure AD)
OAuth2 for your report users. You should enable this option when Dataverse queries
must execute in the security context of the report user.
When the SSO option is enabled, Power BI sends the report user's authenticated Azure
AD credentials in the queries to Dataverse. This option enables Power BI to honor the
security settings that are set up in the data source.
For more information, see Single sign-on (SSO) for DirectQuery sources.
Consider an example of how the Dynamics 365 My Active Accounts view includes a filter
where Owner equals current user.
You can reproduce this result in Power Query by using a native query that embeds the
CURRENT_USER token.
Consider the following example that shows a native query that returns the accounts for
the current user. In the WHERE clause, notice that the ownerid column is filtered by the
CURRENT_USER token.
Power Query M
let
    Source = CommonDataService.Database("demo.crm.dynamics.com",
        [CreateNavigationProperties=false]),
    dbo_account = Value.NativeQuery(Source, "
        SELECT
            accountid, accountnumber, ownerid, address1_city,
            address1_stateorprovince, address1_country
        FROM account
        WHERE statecode = 0
            AND ownerid = CURRENT_USER
        ", null, [EnableFolding=true])
in
    dbo_account
When you publish the model to the Power BI service, you must enable single sign-on
(SSO) so that Power BI will send the report user's authenticated Azure AD credentials to
Dataverse.
Next steps
For more information related to this article, check out the following resources.
This article targets Azure Analysis Services (AAS) data modelers and administrators. It
provides them with guidance and rationale to help migrate their AAS databases to
Power BI Premium or Power BI Embedded.
Background
Power BI has evolved into the leading platform for both self-service and IT-managed
enterprise business intelligence (BI). With exponential growth in data volumes and
complexity, Power BI customers demand enterprise BI solutions that scale to petabytes,
are secure, easy to manage, and accessible to all users across the largest of
organizations.
For over two decades, Microsoft has continued to make deep investments in enterprise
BI. AAS and SQL Server Analysis Services (SSAS) are based on mature BI data modeling
technology used by countless enterprises. Today, that same technology is also at the
heart of Power BI datasets.
Note
In this article, the terms data model, semantic model, BI model, tabular model,
database, and Power BI dataset have the same meaning. This article commonly uses
the terms data model for AAS model and dataset for Power BI model. This article
describes the process of migrating to Power BI Premium but this also applies to
Power BI Embedded.
In recent years, Microsoft has taken great strides to deliver AAS capabilities to Power BI
Premium. As a result, Power BI instantly inherited a large ecosystem of developers,
partners, BI tools, and solutions that were built up over decades.
Power BI Premium workloads, features, and capabilities now results in a modern, cloud
BI platform that goes far beyond comparable functionality available in AAS or SSAS.
Today, many customers have Power BI reports that live connect to AAS. Naturally, these
customers are asking whether there's an opportunity to consolidate by hosting their
data models alongside their reports in Power BI. They often ask questions like:
Does all the AAS functionality we depend on work in Power BI?
Is Power BI backwards compatible with AAS tools and processes?
What capabilities are available only in Power BI?
How do we compare costs between AAS and Power BI?
Why is Microsoft converging enterprise and self-service BI?
How do we migrate from AAS to Power BI Premium?
Is AAS marked for deprecation?
What's Microsoft's roadmap for enterprise data models?
Note
To be clear, currently there aren't any plans to deprecate AAS. There is a priority to
focus investment on Power BI Premium for enterprise data modeling, and so the
additional value provided by Power BI Premium will increase over time. Customers
who choose Power BI Premium can expect to benefit from alignment with the
Microsoft BI product roadmap.
Power BI Premium
Thanks to its distributed architecture, Power BI Premium is less sensitive to overall load,
temporal spikes, and high concurrency. By consolidating capacities to larger Power BI
Premium SKUs, customers can achieve increased performance and throughput.
Scalability benefits associated with Power BI Premium are described later in this article.
Feature comparison
AAS provides the Analysis Services database engine for hosting data models, which is a
core component of a Microsoft enterprise BI architecture. In fact, Power BI Premium is a
superset of AAS because it provides much more functionality. The following table lists
features supported in AAS and Power BI Premium. The table focuses on - but isn't
limited to - Power BI dataset-related capabilities.
Premium workloads
Paginated reports, which are ideal for reports that are designed to be printed, especially when table data overflows to multiple pages (AAS: No; Power BI Premium: Yes)
Dataflows, which store fragments of data intended for use in a Power BI dataset (AAS: No; Power BI Premium: Yes)
AI with dataflows, which use artificial intelligence (AI) with Cognitive Services, Automated Machine Learning, and Azure Machine Learning (AML) integration (AAS: No; Power BI Premium: Yes)
Metrics, which curate key business measures and allow tracking them against objectives (AAS: No; Power BI Premium: Yes)
Business enablement
Business continuity and disaster recovery (BCDR) with Azure regions and availability zones (AAS: No; Power BI Premium: Yes)
Hybrid tables, which comprise in-memory and DirectQuery partitions that can help deliver near real-time results over large tables (AAS: No; Power BI Premium: Yes)
User-defined aggregations, which can improve query performance over very large DirectQuery tables (AAS: No; Power BI Premium: Yes)
Query scale-out, which distributes client queries among replicated servers (AAS: Yes; Power BI Premium: Yes)
Security
Bring Your Own Key (BYOK), which allows customers to use their own encryption key to encrypt data stored in the Microsoft cloud (AAS: No; Power BI Premium: Yes)
Azure Private Link, which provides secure access for data traffic in Power BI (AAS: No; Power BI Premium: Yes)
Single sign-on (SSO) for DirectQuery sources, which allows connecting to data sources by using the report user's identity (AAS: No; Power BI Premium: Yes)
Row-level security (RLS), which restricts access to specific rows of data for specific users (AAS: Yes; Power BI Premium: Yes)
Object-level security (OLS), which restricts access to specific tables or columns for specific users (AAS: Yes; Power BI Premium: Yes)
Firewall, which when enabled, allows setting allowed IP address ranges (AAS: Yes; Power BI Premium: No; use VNet connectivity and Azure Private Link instead)
Governance
Microsoft Purview integration, which helps customers manage and govern Power BI items (AAS: No; Power BI Premium: Yes)
Microsoft Information Protection (MIP) sensitivity labels and integration with Microsoft Defender for Cloud Apps for data loss prevention (AAS: No; Power BI Premium: Yes)
Semantic modeling
Composite models, including using DirectQuery for Power BI datasets and AAS (AAS: No; Power BI Premium: Yes)
Model management
Enhanced refresh, which allows any programming language to perform asynchronous dataset refreshes by using a REST API call (AAS: Yes; Power BI Premium: Yes)
Server properties, which control Analysis Services server instance properties (AAS: Yes; Power BI Premium: No)
Alias server names, which allow connecting to an Analysis Services server instance by using a shorter alias (AAS: Yes; Power BI Premium: No)
XMLA endpoint-enabled APIs for scripting and compatibility with services for automation and ALM, including Azure Functions, Azure Automation, and Azure DevOps (AAS: Yes; Power BI Premium: Yes)
Connectivity
XMLA endpoint, which allows open-platform connectivity for data model consumption and visualization tools, including third-party tools (AAS: Yes; Power BI Premium: Yes)
Multi-Geo feature, which helps multinational customers address regional, industry-specific, or organizational data residency requirements (AAS: Yes; Power BI Premium: Yes)
Discoverability
Data hub integration, which helps users discover, explore, and use Power BI datasets (AAS: No; Power BI Premium: Yes)
Data lineage view and dataset impact analysis, which help users understand and assess Power BI item dependencies (AAS: No; Power BI Premium: Yes)
Premium capacity metrics app, which provides monitoring capabilities for Power BI capacities (AAS: No; Power BI Premium: Yes)
Power BI audit log, which tracks user activities across Power BI and Microsoft 365 (AAS: No; Power BI Premium: Yes)
Azure Log Analytics (LA) integration, which allows administrators to configure a Log Analytics connection for a Power BI workspace (AAS: Yes; Power BI Premium: Yes)
Metric alerts in Azure Monitor, which provide a way to get notified when one of your multi-dimensional metrics crosses a threshold (AAS: Yes; Power BI Premium: No)
XMLA endpoint, which allows diagnostic logging tool connections, including SQL Server Profiler (AAS: Yes; Power BI Premium: Yes)
SQL Server Extended Events (xEvents), which is a light-weight tracing and performance monitoring system useful for diagnosing issues (AAS: Yes; Power BI Premium: No)
Cost comparison
When comparing Power BI Premium to AAS costs, be sure to consider factors beyond
price per core. Power BI provides reduced cost of ownership and greater business value,
with many features that are only available to Power BI data models.
Also, assuming you already use Power BI in your organization, calculate costs based on
the existing profile that combines AAS and Power BI. Compare the existing profile with
the target profile on Power BI Premium. To determine the target profile, be sure to
consider the following points:
Region requirements.
The largest AAS data model size in each region.
The number of users in each region.
The number of users required to develop and manage content.
CPU consumption across AAS and Power BI Premium.
Important
CPU consumption across AAS and Power BI Premium may vary significantly due to
numerous factors. Factors can include the use of other workloads on the same
capacities, refresh patterns, and query patterns. We recommend that you
perform in-depth analysis to quantify comparative CPU consumption across AAS
and Power BI Premium for migrated models.
Tip
To help determine the right type and number of licenses for your business
requirements and circumstances, see this related article.
Consolidation opportunity
Many AAS customers already have Power BI reports that connect to AAS. So, migration
to Power BI can represent an opportunity to consolidate BI items in Power BI Premium.
Consolidation makes the larger sized Premium SKUs more economically viable and can
help to provide higher levels of throughput and scalability.
PPU licenses
The Premium Per User (PPU) license is a per-user license that provides a lower-cost price
point for Premium. PPU licenses are typically purchased by small and medium-sized
companies. They support all the Premium capabilities for data modeling listed earlier.
Pro licenses
A Pro (or PPU) license is required to publish and manage Power BI content. Pro licenses
are typically assigned to developers and administrators, not end users.
For pricing information, see the following resources:
Power BI pricing
Azure Analysis Services pricing
Purchase A SKUs for testing and other scenarios
Scalability benefits
Power BI Premium delivers scalability, performance, and cost-of-ownership benefits not
available in AAS.
Power BI Premium provides features that enable fast interactive analysis over big data.
Such features include aggregations, composite models, and hybrid tables. Each feature
offers a different way to optimally combine import and DirectQuery storage modes,
effectively reducing memory use. AAS, on the other hand, doesn't support these
capabilities; the entire data model uses either import or DirectQuery storage mode.
Power BI Premium limits memory per dataset, and not per capacity or server. Conversely,
AAS requires all data models fit in memory on a single server. That requirement can
compel customers with large data models to purchase larger SKU sizes.
Thanks to the distributed nature of the Premium architecture, more datasets can be
refreshed in parallel. Performing concurrent refreshes on the same AAS server can lead
to refresh errors due to exceeding server memory limits.
In Power BI Premium, CPU consumption during refresh is spread across 24-hour periods.
Power BI Premium evaluates capacity throughput to provide resilience to temporal
spikes in demand for compute resources. When necessary, it can delay refreshes until
sufficient resources become available. This automatic behavior reduces the need for
customers to perform detailed analysis and manage automation scripts to scale servers
up or down. Premium customers should decide on the optimal SKU size for their overall
CPU consumption requirements.
Another advantage of Power BI Premium is that it's able to dynamically balance the
datasets depending on the load of the system. This automatic behavior ensures
busy/active datasets get the necessary memory and CPU resources, while more idle
datasets can be evicted or migrated to other nodes. Datasets are candidates for eviction
when they're not used. They'll be loaded on-demand so that only the required data is
loaded into memory without having to load the whole dataset. On the other hand, AAS
requires all data models be fully loaded in memory always. This requirement means
queries to AAS can rely on the data model being available, but – especially for Power BI
capacities with a high number of data models when some of them are used infrequently
– dynamic memory management can make more efficient use of memory.
Permissions
AAS and SSAS use roles to manage data model access. There are two types of roles: the
server role and database roles. The server role is a fixed role that grants administrator
access to the Analysis Services server instance. Database roles, which are set by data
modelers and administrators, control access to the database and data for non-
administrator users.
Unlike AAS, in Power BI, you only use roles to enforce RLS or OLS. To grant permissions
beyond RLS and OLS, use the Power BI security model (workspace roles and dataset
permissions). For more information, see Dataset permissions.
For more information about Power BI model roles, see Dataset connectivity with the
XMLA endpoint (Model roles).
When you migrate a data model from AAS to Power BI Premium, you must take the
following points into consideration:
Users who were granted Read permission on a model in AAS must be granted
Build permission on the migrated Power BI dataset.
Users who were granted the Administrator permission on a model in AAS must be
granted Write permission on the migrated Power BI dataset.
Refresh automation
Power BI Premium supports XMLA endpoint-enabled APIs for scripting, such as Tabular
Model Scripting Language (TMSL), Tabular Object Model (TOM), and the PowerShell
SqlServer module. These APIs have nearly identical interfaces to those of AAS. For more
information, see Dataset connectivity with the XMLA endpoint (Client applications and
tools).
Generally, scripts and processes that automate partition management and processing
in AAS will work in Power BI Premium. Bear in mind that Power BI Premium datasets
support the incremental refresh feature, which provides automated partition
management for tables that frequently load new and updated data.
Like for AAS, you can use a service principal as an automation account for Power BI
dataset management operations, such as refreshes. For more information, see Dataset
connectivity with the XMLA endpoint (Service principals).
Custom security
Like for AAS, applications can use a service principal to query a Power BI Premium per
capacity or Power BI Embedded dataset by using the CustomData feature.
However, you can't assign a service principal to a model role in Power BI Premium.
Instead, a service principal gains access by assignment to the workspace admin or
member role.
Note
You can't use the CustomData feature when querying Premium Per User (PPU)
datasets because it would be in violation of the license terms and conditions.
Network security
Setting up network security in AAS requires enabling the firewall and configuring IP
address ranges for only those computers accessing the server.
Power BI doesn't have a firewall feature. Instead, Power BI offers a superior network
security model by using VNets and Private Links. For more information, see What is a
virtual network (VNet)?.
Any XMLA-based process that sets data source credentials must be replaced. For more
information, see Dataset connectivity with the XMLA endpoint (Deploy model projects
from Visual Studio).
For more information, see Backup and restore datasets with Power BI Premium.
For information on how to set up gateway data sources for Power BI Premium, see Add
or remove a gateway data source.
Server properties
Unlike AAS, Power BI Premium doesn't support server properties. Instead, you manage
Premium capacity settings.
Link files
Unlike AAS, Power BI Premium doesn't support alias server names.
PowerShell
You can use the SqlServer PowerShell module AAS cmdlets to automate dataset
management tasks, including refresh operations. For more information, see Analysis
Services PowerShell Reference.
However, the Az.AnalysisServices module AAS cmdlets aren't supported for Power BI
datasets. Instead, use the Microsoft Power BI Cmdlets for Windows PowerShell and
PowerShell Core.
Diagnostic logging
AAS integrates with Azure Monitor for diagnostic logging. The most common target for
AAS logs is to Log Analytics workspaces.
Power BI Premium also supports logging to Log Analytics workspaces. Currently, the
events sent to Log Analytics are mainly AS engine events. However, not all events
supported for AAS are supported for Power BI. The Log Analytics schema for Power BI
contains differences compared to AAS, which means existing queries on AAS may not
work in Power BI.
Power BI offers another diagnostic logging capability that isn't offered in AAS. For more
information, see Use the Premium metrics app.
SQL Server Extended Events (xEvents) are supported in AAS but not in Power BI
Premium. For more information, see Monitor Analysis Services with SQL Server Extended
Events.
Business-to-business (B2B)
Both AAS and Power BI support Azure AD B2B collaboration, which enables and governs
sharing with external users. Notably, the User Principal Name (UPN) format required by
AAS is different to Power BI.
To identify the user, Power BI utilizes a unique name claim in Azure AD while AAS uses
an email claim. While there may be many instances where these two identifiers align, the
unique name format is more stringent. If using dynamic RLS in Power BI, ensure that the
value in the user identity table matches the account used to sign in to Power BI.
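As an illustration of that point, a dynamic RLS rule typically compares a column in the user identity table with the USERPRINCIPALNAME DAX function, so the stored values must match the Power BI sign-in account. The table and column names here are hypothetical.
DAX
-- Rule defined on a hypothetical User table.
'User'[UserPrincipalName] = USERPRINCIPALNAME()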
Scale-out
Functionality equivalent to Azure Analysis Services scale-out is supported by Power BI
Premium. For more information, see Power BI Dataset Scale Out.
Migration feature
The Microsoft Azure Analysis Services to Microsoft Power BI Premium migration feature
in Power BI migrates an AAS database to a dataset in Power BI Premium, Power BI
Premium Per User, or Power BI Embedded workspace. For more information, see Migrate
Azure Analysis Services to Power BI.
Next steps
For more information about this article, check out the following resources:
Power BI partners are available to help your organization succeed with the migration
process. To engage a Power BI partner, visit the Power BI partner portal.
Migrate from Azure Analysis Services to
Power BI Premium: Migration scenarios
Article • 02/27/2023
This article compares six hypothetical scenarios when migrating from Azure Analysis
Services (AAS) to Power BI Premium. These scenarios can help you to determine the
right type and number of licenses for your business requirements and circumstances.
Note
An attempt has been made to ensure these scenarios are representative of real
customer migrations; however, individual customer scenarios will of course differ.
Also, this article doesn't include pricing details. You can find current pricing here:
Power BI pricing
Azure Analysis Services pricing
When comparing Power BI Premium to AAS costs, be sure to consider factors beyond
price per core. Power BI provides reduced cost of ownership and greater business value,
with many features that are only available to Power BI data models.
Also, assuming you already use Power BI in your organization, calculate costs based on
the existing profile that combines AAS and Power BI. Compare the existing profile with
the target profile on Power BI Premium. To determine the target profile, be sure to
consider the following points:
Region requirements.
The largest AAS data model size in each region.
The number of users in each region.
The number of users required to develop and manage content.
CPU consumption across AAS and Power BI Premium.
Important
CPU consumption across AAS and Power BI Premium may vary significantly due to
numerous factors. Factors can include the use of other workloads on the same
capacities, refresh patterns, and query patterns. We recommend that you
perform in-depth analysis to quantify comparative CPU consumption across AAS
and Power BI Premium for migrated models.
Migration scenario 1
In the first migration scenario, the customer uses Power BI Premium for the frontend
and AAS for the backend. There are 20 developers who are each responsible for the
development and test environments, and for deployment to production.
Production: 60 GB, S4
Production: 30 GB, S2
Production: 15 GB, S1
Test: 5 GB, B1
Development: 1 GB, D1
Test/development: Premium P1, 20
Production/test/development: Pro, 20
Production/test/development: PPU, 20, 5 GB
Migration scenario 2
In this migration scenario, the customer uses Power BI Premium for the frontend and
AAS for the backend. Production environments are running in different regions. There
are 20 developers who are each responsible for the development and test environments,
and for deployment to production.
West US, Production: 15 GB, S1
West US, Test: 5 GB, B1
West US, Development: 1 GB, D1
The customer needs a Premium capacity in each of the three regions (because the
three existing production AAS models run in different regions). Each capacity size
is based on the largest model.
The 20 developers will need PPU licenses to access test models above 1 GB in size.
Migration scenario 3
In this migration scenario, the customer has Power BI Pro licenses for all users available
with their Office 365 E5 subscription, and they use AAS for the backend. There are 15
developers who are each responsible for the development and test environments, and
for deployment to production.
Production: 35 GB, S2
Production: 30 GB, S2
Test: 5 GB, B1
Development: 1 GB, D1
The two existing production AAS models can be consolidated to run in a Premium
P2 capacity.
The 15 developers will need PPU licenses to access test models above 1 GB in size.
(An add-on is available to step up from Pro to PPU.)
Production/test/development: PPU, 15, 5 GB
Migration scenario 4
In this migration scenario, the customer has Power BI Pro licenses for all users, and they
use AAS for the backend. There are five developers who are each responsible for the
development and test environments, and for deployment to production.
Production: 35 GB, S2
Production: 10 GB, S1
Test: 5 GB, B1
Development: 1 GB, D1
Production/test/development: Pro, 5
The two existing production AAS models can run in PPU workspaces.
All end users and developers will need PPU licenses.
Production/test/development: PPU, 5, 5 GB
Migration scenario 5
In this migration scenario, the customer uses Power BI Premium for the frontend and
AAS for the backend. There are 25 developers who are each responsible for the
development and test environments, and for deployment to production.
Production: 220 GB, S9
Production: 150 GB, S8
Production: 60 GB, S4
Test: 5 GB, B1
Development: 1 GB, D1
Test/development: Premium P1, 25
Production/test/development: Pro, 25
Production/test/development: PPU, 25, 5 GB
Migration scenario 6
In this migration scenario, an ISV company has 400 customers. Each customer has its
own SQL Server Analysis Services (SSAS) multidimensional model (also known as a cube).
The analysis below compares Azure Analysis Services with the Power BI Embedded
alternative.
The 400 tenants are mainly accessed by 50 analysts from the ISV company as well
as two users (on average) from each customer.
The total size of the models is about 100 GB.
Production: 8 GB, S4
Test: 8 GB, B1
Development: 1 GB, D1
Analysts: Pro, 50
Developers: Pro, 20
The A1/P4 SKU was chosen to allow for future model size growth (EM3/A3 SKU can
work also).
The 50 analysts will need PPU licenses to access test models above 1 GB in size.
The total size of the 400 models isn't relevant for pricing; only the largest model
size is important.
Next steps
For more information about this article, check out the following resources:
Power BI partners are available to help your organization succeed with the migration
process. To engage a Power BI partner, visit the Power BI partner portal.
Appropriate use of error functions
Article • 09/20/2022
As a data modeler, when you write a DAX expression that might raise an evaluation-time
error, you can consider using two helpful DAX functions.
The ISERROR function, which takes a single expression and returns TRUE if that
expression results in error.
The IFERROR function, which takes two expressions. Should the first expression
result in error, the value for the second expression is returned. It is in fact a more
optimized implementation of nesting the ISERROR function inside an IF function.
However, while these functions can be helpful and can contribute to writing easy-to-
understand expressions, they can also significantly degrade the performance of
calculations. It can happen because these functions increase the number of storage
engine scans required.
Most evaluation-time errors are due to unexpected BLANKs or zero values, or invalid
data type conversion.
Recommendations
It's better to avoid using the ISERROR and IFERROR functions. Instead, apply defensive
strategies when developing the model and writing expressions. Strategies can include:
Ensuring quality data is loaded into the model: Use Power Query transformations
to remove or substitute invalid or missing values, and to set correct data types. A
Power Query transformation can also be used to filter rows when errors, like invalid
data conversion, occur.
Data quality can also be controlled by setting the model column Is Nullable
property to Off, which will fail the data refresh should BLANKs be encountered. If
this failure occurs, the data loaded by the last successful refresh will remain in the
tables.
Using the IF function: The IF function logical test expression can determine
whether an error result would occur. Note, like the ISERROR and IFERROR
functions, this function can result in additional storage engine scans, but will likely
perform better than them as no error needs to be raised.
Using error-tolerant functions: Some DAX functions will test and compensate for
error conditions. These functions allow you to enter an alternate result that would
be returned instead. The DIVIDE function is one such example. For additional
guidance about this function, read the DAX: DIVIDE function vs divide operator (/)
article.
Example
The following measure expression tests whether an error would be raised. It returns
BLANK in this instance (which is the case when you do not provide the IF function with a
value-if-false expression).
DAX
Profit Margin =
IF(
    NOT ISERROR([Profit] / [Sales]),
    [Profit] / [Sales]
)
This next version of the measure expression has been improved by using the IFERROR
function in place of the IF and ISERROR functions.
DAX
Profit Margin
= IFERROR([Profit] / [Sales], BLANK())
However, this final version of the measure expression achieves the same outcome, yet
more efficiently and elegantly.
DAX
Profit Margin
= DIVIDE([Profit], [Sales])
See also
Learning path: Use DAX in Power BI Desktop
Avoid converting BLANKs to values
Article • 09/20/2022
As a data modeler, when writing measure expressions you might come across cases
where a meaningful value can't be returned. In these instances, you may be tempted to
return a value—like zero—instead. It's suggested you carefully determine whether this
design is efficient and practical.
Consider the following measure definition that explicitly converts BLANK results to zero.
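A minimal sketch of such a measure, assuming a base Sales measure and a hypothetical name for the new measure:
DAX
-- Returns zero instead of BLANK when there are no sales.
Sales (No Blank) =
IF(
    ISBLANK([Sales]),
    0,
    [Sales]
)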
Consider another measure definition that also converts BLANK results to zero.
DAX
Profit Margin =
DIVIDE([Profit], [Sales], 0)
The DIVIDE function divides the Profit measure by the Sales measure. Should the
denominator be zero or BLANK, the third argument—the alternate result (which is
optional)—is returned. In this example, because zero is passed as the alternate result,
the measure is guaranteed to always return a value.
These measure designs are inefficient and lead to poor report designs.
When they're added to a report visual, Power BI attempts to retrieve all groupings within
the filter context. The evaluation and retrieval of large query results often leads to slow
report rendering. Each example measure effectively turns a sparse calculation into a
dense one, forcing Power BI to use more memory than necessary.
Let's see what happens when the Profit Margin measure is added to a table visual,
grouping by customer.
The table visual displays an overwhelming number of rows. (There are in fact 18,484
customers in the model, and so the table attempts to display all of them.) Notice that
the customers in view haven't achieved any sales. Yet, because the Profit Margin
measure always returns a value, they are displayed.
Note
When there are too many data points to display in a visual, Power BI may use data
reduction strategies to remove or summarize large query results. For more
information, see Data point limits and strategies by visual type.
Let's see what happens when the Profit Margin measure definition is improved. It now
returns a value only when the Sales measure isn't BLANK (or zero).
DAX
Profit Margin =
DIVIDE([Profit], [Sales])
The table visual now displays only customers who have made sales within the current
filter context. The improved measure results in a more efficient and practical experience
for your report users.
Tip
When necessary, you can configure a visual to display all groupings (that return
values or BLANK) within the filter context by enabling the Show Items With No
Data option.
Recommendation
It's recommended that your measures return BLANK when a meaningful value cannot be
returned.
This design approach is efficient, allowing Power BI to render reports faster. Also,
returning BLANK is better because report visuals—by default—eliminate groupings
when summarizations are BLANK.
See also
Learning path: Use DAX in Power BI Desktop
As a data modeler, it's common you'll write DAX expressions that need to be evaluated
in a modified filter context. For example, you can write a measure definition to calculate
sales for "high margin products". We'll describe this calculation later in this article.
Note
This article is especially relevant for model calculations that apply filters to Import
tables.
The CALCULATE and CALCULATETABLE DAX functions are important and useful
functions. They let you write calculations that remove or add filters, or modify
relationship paths. It's done by passing in filter arguments, which are either Boolean
expressions, table expressions, or special filter functions. We'll only discuss Boolean and
table expressions in this article.
Consider the following measure definition, which calculates red product sales by using a
table expression. It will replace any filters that might be applied to the Product table.
DAX
Red Sales =
CALCULATE(
[Sales],
FILTER('Product', 'Product'[Color] = "Red")
)
The CALCULATE function accepts a table expression returned by the FILTER DAX
function, which evaluates its filter expression for each row of the Product table. It
achieves the correct result—the sales result for red products. However, it could be
achieved much more efficiently by using a Boolean expression.
Here's an improved measure definition, which uses a Boolean expression instead of the
table expression. The KEEPFILTERS DAX function ensures any existing filters applied to
the Color column are preserved, and not overwritten.
DAX
Red Sales =
CALCULATE(
[Sales],
KEEPFILTERS('Product'[Color] = "Red")
)
It's recommended you pass filter arguments as Boolean expressions, whenever possible.
It's because Import model tables are in-memory column stores. They are explicitly
optimized to efficiently filter columns in this way.
There are, however, restrictions that apply to Boolean expressions when they're used as filter arguments. They:
Can reference columns from a single table only
Can't reference measures
Can't use nested CALCULATE functions
These restrictions mean that you'll need to use table expressions for more complex filter requirements.
Consider now a different measure definition. The requirement is to calculate sales, but
only for months that have achieved a profit.
DAX
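-- Illustrative sketch only: the measure name, the [Profit] measure, and the
-- 'Date'[Month] column are assumptions based on the surrounding text
Sales for Profitable Months =
CALCULATE(
    [Sales],
    FILTER(
        VALUES('Date'[Month]),
        [Profit] > 0
    )
)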
In this example, the FILTER function must be used because the Profit measure needs to be evaluated to eliminate those months that didn't achieve a profit. It's not possible to use a measure in a Boolean expression when it's used as a filter argument.
Recommendations
For best performance, it's recommended you use Boolean expressions as filter
arguments, whenever possible.
Therefore, the FILTER function should be used only when necessary. You can use it to perform complex filters. These complex filters can involve:
Measures
Other columns
Using the OR DAX function, or the OR logical operator (||)
See also
Filter functions (DAX)
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
As a data modeler, your DAX expressions will refer to model columns and measures.
Columns and measures are always associated with model tables, but these associations
are different, so we have different recommendations on how you'll reference them in
your expressions.
Columns
A column is a table-level object, and column names must be unique within a table. So
it's possible that the same column name is used multiple times in your model—
providing they belong to different tables. There's one more rule: a column cannot have the same name as a measure or hierarchy that exists in the same table.
In general, DAX will not force using a fully qualified reference to a column. A fully
qualified reference means that the table name precedes the column name.
Here's an example of a calculated column definition using only column name references.
The Sales and Cost columns both belong to a table named Orders.
DAX
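-- Illustrative sketch: the calculated column is named Profit here for illustration
Profit = [Sales] - [Cost]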
The same definition can be rewritten with fully qualified column references.
DAX
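-- The same sketch, rewritten with fully qualified column references
Profit = Orders[Sales] - Orders[Cost]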
Sometimes, however, you'll be required to use fully qualified column references when
Power BI detects ambiguity. When entering a formula, a red squiggly and error message
will alert you. Also, some DAX functions, like the LOOKUPVALUE function, require
the use of fully qualified columns.
It's recommended you always fully qualify your column references. The reasons are
provided in the Recommendations section.
Measures
A measure is a model-level object. For this reason, measure names must be unique
within the model. However, in the Fields pane, report authors will see each measure
associated with a single model table. This association is set for cosmetic reasons, and
you can configure it by setting the Home Table property for the measure. For more
information, see Measures in Power BI Desktop (Organizing your measures).
It's possible to use a fully qualified measure reference in your expressions. DAX IntelliSense will
even offer the suggestion. However, it isn't necessary, and it's not a recommended
practice. If you change the home table for a measure, any expression that uses a fully
qualified measure reference to it will break. You'll then need to edit each broken formula
to remove (or update) the measure reference.
It's recommended you never qualify your measure references. The reasons are provided
in the Recommendations section.
Recommendations
Our recommendations are simple and easy to remember:
Always use fully qualified column references.
Never use fully qualified measure references.
Here's why:
A fully qualified column reference can't be mistaken for a measure reference, which makes your formulas clearer and easier to maintain.
An unqualified measure reference keeps working if you later change the measure's home table.
See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
When using the DIVIDE function, you must pass in numerator and denominator
expressions. Optionally, you can pass in a value that represents an alternate result.
DAX
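DIVIDE(<numerator>, <denominator> [,<alternateresult>])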
The DIVIDE function was designed to automatically handle division by zero cases. If an
alternate result is not passed in, and the denominator is zero or BLANK, the function
returns BLANK. When an alternate result is passed in, it's returned instead of BLANK.
The DIVIDE function is convenient because it saves your expression from having to first
test the denominator value. The function is also better optimized for testing the
denominator value than the IF function. The performance gain is significant since
checking for division by zero is expensive. Further using DIVIDE results in a more concise
and elegant expression.
Example
The following measure expression produces a safe division, but it involves using four
DAX functions.
DAX
Profit Margin =
IF(
OR(
ISBLANK([Sales]),
[Sales] == 0
),
BLANK(),
[Profit] / [Sales]
)
This measure expression achieves the same outcome, yet more efficiently and elegantly.
DAX
Profit Margin =
DIVIDE([Profit], [Sales])
Recommendations
It's recommended that you use the DIVIDE function whenever the denominator is an
expression that could return zero or BLANK.
In the case that the denominator is a constant value, we recommend that you use the
divide operator. In this case, the division is guaranteed to succeed, and your expression
will perform better because it will avoid unnecessary testing.
Carefully consider whether the DIVIDE function should return an alternate value. For
measures, it's usually a better design that they return BLANK. Returning BLANK is better
because report visuals—by default—eliminate groupings when summarizations are
BLANK. It allows the visual to focus attention on groups where data exists. When
necessary, in Power BI, you can configure the visual to display all groups (that return
values or BLANK) within the filter context by enabling the Show items with no data
option.
See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
As a data modeler, sometimes you might need to write a DAX expression that counts
table rows. The table could be a model table or an expression that returns a table.
Your requirement can be achieved in two ways. You can use the COUNT function to
count column values, or you can use the COUNTROWS function to count table rows.
Both functions will achieve the same result, providing that the counted column contains
no BLANKs.
DAX
Sales Orders =
COUNT(Sales[OrderDate])
Providing that the granularity of the Sales table is one row per sales order, and the
OrderDate column does not contain BLANKs, then the measure will return a correct
result. Here's a better measure definition that uses the COUNTROWS function instead.
DAX
Sales Orders =
COUNTROWS(Sales)
There are three reasons why the second measure definition is better:
It's more efficient, and so it will perform better.
It doesn't depend on any particular column being free of BLANKs.
The intention of the formula is clearer, to the point of being self-describing.
Recommendation
When it's your intention to count table rows, it's recommended you always use the
COUNTROWS function.
See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
As a data modeler, sometimes you might need to write a DAX expression that tests
whether a column is filtered by a specific value.
In earlier versions of DAX, this requirement was safely achieved by using a pattern
involving three DAX functions: IF, HASONEVALUE, and VALUES. The following measure
definition presents an example. It calculates the sales tax amount, but only for sales
made to Australian customers.
DAX
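-- Illustrative sketch only: assumes a [Sales] measure and a Customer[Country-Region]
-- column, matching the explanation that follows
Australian Sales Tax =
IF(
    HASONEVALUE(Customer[Country-Region]),
    IF(
        VALUES(Customer[Country-Region]) = "Australia",
        [Sales] * 0.10
    )
)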
In the example, the HASONEVALUE function returns TRUE only when a single value of
the Country-Region column is visible in the current filter context. When it's TRUE, the
VALUES function is compared to the literal text "Australia". When the comparison returns TRUE, the Sales measure is multiplied by 0.10 (representing 10%). If the
HASONEVALUE function returns FALSE—because more than one value filters the column
—the first IF function returns BLANK.
The use of the HASONEVALUE is a defensive technique. It's required because it's
possible that multiple values filter the Country-Region column. In this case, the VALUES
function returns a table of multiple rows. Comparing a table of multiple rows to a scalar
value results in an error.
Recommendation
It's recommended that you use the SELECTEDVALUE function. It achieves the same
outcome as the pattern described in this article, yet more efficiently and elegantly.
Using the SELECTEDVALUE function, the example measure definition is now rewritten.
DAX
Australian Sales Tax =
IF(
SELECTEDVALUE(Customer[Country-Region]) = "Australia",
[Sales] * 0.10
)
Tip
It's possible to pass an alternate result value into the SELECTEDVALUE function. The
alternate result value is returned when either no filters—or multiple filters—are
applied to the column.
See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
As a data modeler, writing and debugging some DAX calculations can be challenging.
It's common that complex calculation requirements often involve writing compound or
complex expressions. Compound expressions can involve the use of many nested
functions, and possibly the reuse of expression logic.
Using variables in your DAX formulas can help you write more complex and efficient
calculations. Variables can improve performance, reliability, and readability, and they can reduce complexity.
In this article, we'll demonstrate the first three benefits by using an example measure for
year-over-year (YoY) sales growth. (The formula for YoY sales growth is period sales,
minus sales for the same period last year, divided by sales for the same period last year.)
DAX
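-- Illustrative sketch only: repeats the "same period last year" expression twice,
-- matching the improved version shown later in this article
Sales YoY Growth % =
DIVIDE(
    ([Sales] - CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))),
    CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))
)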
The measure produces the correct result, yet let's now see how it can be improved.
Improve performance
Notice that the formula repeats the expression that calculates "same period last year".
This formula is inefficient, as it requires Power BI to evaluate the same expression twice.
The measure definition can be made more efficient by using a variable, VAR.
DAX
Sales YoY Growth % =
VAR SalesPriorYear =
CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))
RETURN
DIVIDE(([Sales] - SalesPriorYear), SalesPriorYear)
The measure continues to produce the correct result, and does so in about half the
query time.
Improve readability
In the previous measure definition, notice how the choice of variable name makes the
RETURN expression simpler to understand. The expression is short and self-describing.
Simplify debugging
Variables can also help you debug a formula. To test an expression assigned to a
variable, you temporarily rewrite the RETURN expression to output the variable.
The following measure definition returns only the SalesPriorYear variable. Notice how it
comments-out the intended RETURN expression. This technique allows you to easily
revert it back once your debugging is complete.
DAX
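-- Illustrative sketch only: temporarily returns the variable, with the intended
-- RETURN expression commented out for debugging
Sales YoY Growth % =
VAR SalesPriorYear =
    CALCULATE([Sales], PARALLELPERIOD('Date'[Date], -12, MONTH))
RETURN
    --DIVIDE(([Sales] - SalesPriorYear), SalesPriorYear)
    SalesPriorYear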
Reduce complexity
In earlier versions of DAX, variables were not yet supported. Complex expressions that
introduced new filter contexts were required to use the EARLIER or EARLIEST DAX
functions to reference outer filter contexts. Unfortunately, data modelers found these
functions difficult to understand and use.
Variables are always evaluated outside the filters your RETURN expression applies. For
this reason, when you use a variable within a modified filter context, it achieves the
same result as the EARLIEST function. The use of the EARLIER or EARLIEST functions can
therefore be avoided. It means you can now write formulas that are less complex, and
that are easier to understand.
Consider the following calculated column definition added to the Subcategory table. It
evaluates a rank for each product subcategory based on the Subcategory Sales column
values.
DAX
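-- Illustrative sketch only: the calculated column name and ranking logic are
-- assumptions based on the surrounding text
Subcategory Sales Rank =
COUNTROWS(
    FILTER(
        Subcategory,
        EARLIER(Subcategory[Subcategory Sales]) < Subcategory[Subcategory Sales]
    )
) + 1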
The EARLIER function is used to refer to the Subcategory Sales column value in the
current row context.
The calculated column definition can be improved by using a variable instead of the
EARLIER function. The CurrentSubcategorySales variable stores the Subcategory Sales
column value in the current row context, and the RETURN expression uses it within a
modified filter context.
DAX
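-- The same rank sketch, rewritten with a variable in place of EARLIER
Subcategory Sales Rank =
VAR CurrentSubcategorySales = Subcategory[Subcategory Sales]
RETURN
    COUNTROWS(
        FILTER(
            Subcategory,
            CurrentSubcategorySales < Subcategory[Subcategory Sales]
        )
    ) + 1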
See also
VAR DAX article
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
The Adventure Works DW 2020 Power BI Desktop sample model is designed to support
your DAX learning. The model is based on the Adventure Works data warehouse sample
for AdventureWorksDW2017—however, the data has been modified to suit the
objectives of the sample model.
The sample model does not contain any DAX formulas. It does however support
hundreds or even thousands of potential calculation formulas and queries. Some
function examples, like those in the CALCULATE, DATESBETWEEN, DATESINPERIOD, IF, and LOOKUPVALUE articles, can be added to the sample model without modification. We're working
on including more examples in other function reference articles that work with the
sample model.
Scenario
The Adventure Works company represents a bicycle manufacturer that sells bicycles and
accessories to global markets. The company has their data warehouse data stored in an
Azure SQL Database.
Model structure
The model has seven tables:
Customer: Describes customers and their geographic location. Customers purchase products online (Internet sales).
Date: There are three relationships between the Date and Sales tables, for order date, ship date, and due date. The order date relationship is active. The company reports sales using a fiscal year that commences on July 1 of each year. The table is marked as a date table using the Date column.
Product: Describes the products sold by the company.
Reseller: Describes resellers and their geographic location. Resellers on-sell products to their customers.
Sales: Stores rows at sales order line grain. All financial values are in US dollars (USD). The earliest order date is July 1, 2017, and the latest order date is June 15, 2020.
Sales Order: Describes sales order and order line numbers, and also the sales channel, which is either Reseller or Internet. This table has a one-to-one relationship with the Sales table.
Sales Territory: Sales territories are organized into groups (North America, Europe, and Pacific), countries, and regions. Only the United States sells products at the region level.
Download sample
Download the Power BI Desktop sample model file here .
See also
Learning path: Use DAX in Power BI Desktop
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
When creating a new Power BI Desktop solution, one of the first tasks you need to do is
"get data". Getting data can result in two distinctly different outcomes. It could:
This article is concerned with the second scenario. It provides guidance on whether a
report and model should be combined into a single Power BI Desktop file.
Data modelers can still use the Power BI Desktop report authoring experience to test
and validate their model designs. However, just after publishing their file to the Power BI
service they should remove the report from the workspace. And, they must remember to
remove the report each time they republish and overwrite the dataset.
So, manage model changes carefully. If possible, avoid breaking changes like renaming or removing tables, columns, or measures that reports depend on.
If you must make breaking changes to your models, we recommend you either:
View related content for the dataset in the Power BI service
Use data lineage view
Both options allow you to quickly identify any related reports and dashboards. Data
lineage view is probably the better choice because it's easy to see the contact person for
each related item. In fact, it's a hyperlink that opens an email message addressed to the
contact.
We recommend you contact the owner of each related item to let them know of any
planned breaking changes. This way, they can be prepared and ready to fix and
republish their reports, helping to minimize downtime and frustration.
Next steps
For more information related to this article, check out the following resources:
This article targets you as a report author designing Power BI reports. It provides
suggestions and recommendations when creating report page tooltips.
Suggestions
Report page tooltips can enhance the experience for your report users. Page tooltips
allow your report users to quickly and efficiently gain deeper insights from a visual. They
can be associated with different report objects:
Visuals: On a visual-by-visual basis, you can configure which visuals will reveal your
page tooltip. Per visual, it's possible to have the visual reveal no tooltip, default to
the visual tooltips (configured in the visual fields pane), or use a specific page
tooltip.
Visual headers: You can configure specific visuals to display a page tooltip. Your
report users can reveal the page tooltip when they hover their cursor over the
visual header icon—be sure to educate your users about this icon.
Note
A report visual can only reveal a page tooltip when tooltip page filters are
compatible with the visual's design. For example, a visual that groups by product is
compatible with a tooltip page that filters by product.
Page tooltips don't support interactivity. If you want your report users to interact,
create a drillthrough page instead.
We suggest that you consider three page tooltip design scenarios:
Different perspective
Add detail
Add help
Different perspective
A page tooltip can visualize the same data as the source visual. It's done by using the
same visual and pivoting groups, or by using different visual types. Page tooltips can
also apply different filters than those filters applied to the source visual.
The following example shows what happens when the report user hovers their cursor
over the EnabledUsers value. The filter context for the value is Yammer in November
2018.
A page tooltip is revealed. It presents a different data visual (line and clustered column
chart) and applies a contrasting time filter. Notice that the filter context for the data
point is November 2018, yet the page tooltip displays a trend over a full year of months.
Add detail
A page tooltip can display additional details and add context.
The following example shows what happens when the report user hovers their cursor
over the Average of Violation Points value, for zip code 98022.
A page tooltip is revealed. It presents specific attributes and statistics for zip code
98022.
Add help
Visual headers can be configured to reveal page tooltips. You can add
help documentation to a page tooltip by using richly formatted text boxes. It's also
possible to add images and shapes.
Interestingly, buttons, images, text boxes, and shapes can also reveal a visual header
page tooltip.
The following example shows what happens when the report user hovers their cursor
over the visual header icon.
A page tooltip is revealed. It presents richly formatted text in four text boxes, and a shape
(line). The page tooltip conveys help by describing each acronym displayed in the visual.
Recommendations
At report design time, we recommend the following practices:
Page size: Configure your page tooltip to be small. You can use the built-in Tooltip
option (320 pixels wide, 240 pixels high). Or, you can set custom dimensions. Take
care not to use a page size that's too large—it can obscure the visuals on the
source page.
Page view: In the report designer, set the page view to Actual Size (page view defaults
to Fit to Page). This way, you can see the true size of the page tooltip as you
design it.
Style: Consider designing your page tooltip to use the same theme and style as the
report. This way, users feel like they are in the same report. Or, design a
complementary style for your tooltips, and be sure to apply this style to all page
tooltips.
Tooltip filters: Assign filters to the page tooltip so that you can preview a realistic
result as you design it. Be sure to remove these filters before you publish your
report.
Page visibility: Always hide tooltip pages—users shouldn't navigate directly to
them.
Next steps
For more information related to this article, check out the following resources:
This article targets you as a report author who designs Power BI reports. It provides
suggestions and recommendations when creating report page drillthrough.
It's recommended that you design your report to allow report users to achieve the following flow: first, view a report page; next, identify a visual element (a visual, or a data point) to analyze more deeply; and then, drill through (typically by right-clicking the element) to a page that reveals details filtered to their selection.
Suggestions
We suggest that you consider two types of drillthrough scenarios:
Additional depth
Broader perspective
Additional depth
When your report page displays summarized results, a drillthrough page can lead report
users to transaction-level details. This design approach allows them to view supporting
transactions, and only when needed.
The following example shows what happens when a report user drills through from a
monthly sales summary. The drillthrough page contains a detailed list of orders for a
specific month.
Broader perspective
A drillthrough page can achieve the opposite of additional depth. This scenario is great
for drilling through to a holistic view.
The following example shows what happens when a report user drills through from a zip
code. The drillthrough page displays general information about that zip code.
Recommendations
At report design time, we recommend the following practices:
Style: Consider designing your drillthrough page to use the same theme and style
as the report. This way, users feel like they are in the same report.
Drillthrough filters: Set drillthrough filters so you can preview a realistic result as
you design the drillthrough page. Be sure to remove these filters before you
publish the report.
Additional capabilities: A drillthrough page is like any report page. You can even
enhance it with additional interactive capabilities, including slicers or filters.
Blanks: Avoid adding visuals that could display BLANK, or produce errors when
drillthrough filters are applied.
Page visibility: Consider hiding drillthrough pages. If you decide to keep a
drillthrough page visible, be sure to add a button that allows users to clear any
previously-set drillthrough filters. Assign a bookmark to the button. The bookmark
should be configured to remove all filters.
Back button: A back button is added automatically when you assign a drillthrough
filter. It's a good idea to keep it. This way, your report users can easily return to the
source page.
Discovery: Help promote awareness of a drillthrough page by setting visual header
icon text, or adding instructions to a text box. You can also design an overlay, as
described in this blog post .
Tip
It's also possible to configure drillthrough to your Power BI paginated reports. You
can do this by adding links to Power BI reports. Links can define URL parameters.
Next steps
For more information related to this article, check out the following resources:
This article targets you as a report author who designs reports for Power BI. It provides
suggestions to help you choose when to develop Power BI paginated reports.
Power BI paginated reports are optimized for printing, or PDF generation. They also
provide you with the ability to produce highly formatted, pixel-perfect layouts. So,
paginated reports are ideal for operational reports, like sales invoices.
In contrast, Power BI reports are optimized for exploration and interactivity. Also, they
can present your data using a comprehensive range of ultra-modern visuals. Power BI
reports, therefore, are ideal for analytic reports, enabling your report users to explore
data, and to discover relationships and patterns.
Legacy reports
When you already have SQL Server Reporting Services (SSRS) Report Definition
Language (RDL) reports, you can choose to redevelop them as Power BI reports, or
migrate them as paginated reports to Power BI. For more information, see Migrate SQL
Server Reporting Services reports to Power BI.
Once published to a Power BI workspace, paginated reports are available side by side
with Power BI reports. They can then be easily distributed using Power BI apps.
You might consider redeveloping SSRS reports, rather than migrating them. It's
especially true for those reports that are intended to deliver analytic experiences. In
these cases, Power BI reports will likely deliver better report user experiences.
Paginated report scenarios
There are many compelling scenarios when you might favor developing a Power BI
paginated report. Many are features or capabilities not supported by Power BI reports.
Print-ready: Paginated reports are optimized for printing, or PDF generation. When
necessary, data regions can expand and overflow to multiple pages in a controlled
way. Your report layouts can define margins, and page headers and footers.
Render formats: Power BI can render paginated reports in different formats.
Formats include Microsoft Excel, Microsoft Word, Microsoft PowerPoint, PDF, CSV,
XML, and MHTML. (The MHTML format is used by the Power BI service to render
reports.) Your report users can decide to export in the format that suits them.
Precision layout: You can design highly formatted, pixel-perfect layouts—to the
exact size and location configured in fractions of inches, or centimeters.
Dynamic layout: You can produce highly responsive layouts by setting many report
properties to use VB.NET expressions. Expressions have access to many core .NET
Framework libraries.
Render-specific layout: You can use expressions to modify the report layout based
on the rendering format applied. For example, you can design the report to disable
toggling visibility (to drill down and drill up) when it's rendered using a non-
interactive format, like PDF.
Native queries: You don't need to first develop a Power BI dataset. It's possible to
author native queries (or use stored procedures) for any supported data source.
Queries can include parameterization.
Graphic query designers: Power BI Report Builder includes graphic query
designers to help you write, and test, your dataset queries.
Static datasets: You can define a dataset, and enter data directly into your report
definition. This capability is especially useful to support a demo, or for delivering a
proof of concept (POC).
Data integration: You can combine data from different data sources, or with static
datasets. It's done by creating custom fields using VB.NET expressions.
Parameterization: You can design highly customized parameterization experiences,
including data-driven, and cascading parameters. It's also possible to define
parameter defaults. These experiences can be designed to help your report users
quickly set appropriate filters. Also, parameters don't need to filter report data;
they can be used to support "what-if" scenarios, or dynamic filtering or styling.
Image data: Your report can render images when they're stored in binary format in
a data source.
Custom code: You can develop code blocks of VB.NET functions in your report,
and use them in any report expression.
Subreports: You can embed other Power BI paginated reports (from the same
workspace) into your report.
Flexible data grids: You have fine-grained control of grid layouts by using the
tablix data region. It supports complex layouts, too, including nested and adjacent
groups. And, it can be configured to repeat headings when printed over multiple
pages. Also, it can embed a subreport or other visualizations, including data bars,
sparklines, and indicators.
Spatial data types: The map data region can visualize SQL Server spatial data
types. So, the GEOGRAPHY and GEOMETRY data types can be used to visualize
points, lines, or polygons. It's also possible to visualize polygons defined in ESRI
shape files.
Modern gauges: Radial and linear gauges can be used to display KPI values and
status. They can even be embedded into grid data regions, repeating within
groups.
HTML rendering: You can display richly formatted text when it's stored as HTML.
Mail merge: You can use text box placeholders to inject data values into text. This
way, you can produce a mail merge report.
Interactivity features: Interactive features include toggling visibility (to drill down
and drill up), links, interactive sorting, and tooltips. You can also add links that
drillthrough to Power BI reports, or other Power BI paginated reports. Links can
even jump to another location within the same report.
Subscriptions: Power BI can deliver paginated reports on a schedule as emails, with
report attachments in any supported format.
Per-user layouts: You can create responsive report layouts based on the
authenticated user who opens the report. You can design the report to filter data
differently, hide data regions or visualizations, apply different formats, or set user-
specific parameter defaults.
Next steps
For more information related to this article, check out the following resources:
This article targets you as a report author designing Power BI paginated reports. It
provides recommendations to help you design effective and efficient data retrieval.
If you can choose the data source type (possibly the case in a new project), we
recommend that you use cloud-based data sources. Paginated reports can connect with
lower network latency, especially when the data sources reside in the same region as
your Power BI tenant. Also, it's possible to connect to these sources by using Single
Sign-On (SSO). It means the report user's identity can flow to the data source, allowing
per-user row-level permissions to be enforced. Currently, SSO is only supported for on-
premises data sources SQL Server and Oracle (see Supported data sources for Power BI
paginated reports).
Note
When SSO isn't available for your data source, you can still enforce row-level permissions. It's done by passing the UserID built-in
field to a dataset query parameter. The data source will need to store User Principal
Name (UPN) values in a way that it can correctly filter query results.
For example, consider that each salesperson is stored as a row in the Salesperson
table. The table has columns for UPN, and also the salesperson's sales region. At
query time, the table is filtered by the UPN of the report user, and it's also related
to sales facts using an inner join. This way, the query effectively filters sales fact
rows to those of the report user's sales region.
When your report data source is a relational database, consider encapsulating your dataset queries in stored procedures (a brief sketch follows this list). Stored procedures deliver several benefits:
Parameterization
Encapsulation of programming logic, allowing for more complex data preparation
(for example, temporary tables, cursors, or scalar user-defined functions)
Improved maintainability, allowing stored procedure logic to be easily updated. In
some cases, it can be done without the need to modify and republish paginated
reports (providing column names and data types remain unchanged).
Better performance, as their execution plans are cached for reuse
Reuse of stored procedures across multiple reports
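As a minimal sketch of this approach (the procedure, table, and column names here are assumptions for illustration), a parameterized stored procedure encapsulates the dataset query; the report dataset then maps its query parameter to a report parameter:
SQL
-- Illustrative sketch only: object names are assumptions
CREATE PROCEDURE [dbo].[GetSalesByYear]
    @SalesYear int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT
        [OrderDate],
        [SalesAmount]
    FROM
        [Sales]
    WHERE
        YEAR([OrderDate]) = @SalesYear;
END;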
In Power BI Report Builder, you can use the relational query designer to graphically
construct a query statement—but only for Microsoft data sources.
In Power BI Report Builder, you have a choice of two query designers: The Analysis
Services DAX query designer, and the Analysis Services MDX query designer. These
designers can be used for Power BI dataset data sources, or any SQL Server Analysis
Services or Azure Analysis Services model—tabular or multidimensional.
We suggest you use the DAX query designer—providing it entirely meets your query
needs. If the model doesn't define the measures you need, you'll need to switch to
query mode. In this mode, you can customize the query statement by adding
expressions (to achieve summarization).
The MDX query designer requires your model to include measures. The designer has
two capabilities not supported by the DAX query designer. Specifically, it allows you to define query-level calculated members (in MDX), and to configure data regions to request server aggregates of non-detail data.
To limit rows, you should always apply the most restrictive filters, and define aggregate
queries. Aggregate queries group and summarize source data to retrieve higher-grain
results. For example, consider that your report needs to present a summary of
salesperson sales. Instead of retrieving all sales order rows, create a dataset query that
groups by salesperson, and summarizes sales for each group.
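To sketch the idea (the table and column names are assumptions), an aggregate dataset query could group at the salesperson grain and summarize sales, rather than retrieving every order row:
SQL
-- Illustrative sketch only: returns one summarized row per salesperson
SELECT
    [SalespersonKey],
    SUM([SalesAmount]) AS [SalesAmount]
FROM
    [Sales]
GROUP BY
    [SalespersonKey];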
Expression-based fields
It's possible to extend a report dataset with fields based on expressions. For example, if
your dataset sources customer first name and last name, you might want a field that
concatenates the two fields to produce the customer full name. To achieve this
calculation, you have two options. You can:
Create a calculated field in the report dataset, defined by a Report Builder expression.
Inject an expression directly into the dataset's native query.
We recommend the latter option, whenever possible (a brief sketch follows this list). There are two good reasons why injecting expressions directly into your dataset query is better:
It's possible your data source is optimized to evaluate the expression more
efficiently than Power BI (it's especially the case for relational databases).
Report performance is improved because there's no need for Power BI to
materialize calculated fields prior to report rendering. Calculated fields can
noticeably extend report render time when datasets retrieve a large number of
rows.
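For example, here's a sketch of injecting the concatenation into the dataset's native query (the column names are assumptions):
SQL
-- Illustrative sketch only: the data source evaluates the expression
SELECT
    [CustomerKey],
    [FirstName] + N' ' + [LastName] AS [CustomerFullName]
FROM
    [Customer];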
Field names
When you create a dataset, its fields are automatically named after the query columns.
It's possible these names aren't friendly or intuitive. It's also possible that source query
column names contain characters prohibited in Report Definition Language (RDL) object
identifiers (like spaces and symbols). In this case, the prohibited characters are replaced
with an underscore character (_).
We recommend that you first verify that all field names are friendly, concise, yet still
meaningful. If not, we suggest you rename them before you commence the report
layout. It's because renamed fields don't ripple changes through to the expressions used
in your report layout. If you do decide to rename fields after you've commenced the
report layout, you'll need to find and update all broken expressions.
Filter vs parameter
It's likely that your paginated report designs will have report parameters. Report
parameters are commonly used to prompt your report user to filter the report. As a
paginated report author, you have two ways to achieve report filtering. You can map a
report parameter to:
A dataset filter, in which case the report parameter value(s) are used to filter the
data already retrieved by the dataset.
A dataset parameter, in which case the report parameter value(s) are injected into
the native query sent to the data source.
Note
All report datasets are cached on a per-session basis for up to 10 minutes beyond
their last use. A dataset can be re-used when submitting new parameter values
(filtering), rendering the report in a different format, or interacting with the report
design in some way, like toggling visibility, or sorting.
Consider, then, an example of a sales report that has a single report parameter to filter
the report by a single year. The dataset retrieves sales for all years. It does so because
the report parameter maps to the dataset filters. The report displays data for the
requested year, which is a subset of the dataset data. When the report user changes the
report parameter to a different year—and then views the report—Power BI doesn't need
to retrieve any source data. Instead, it applies a different filter to the already-cached
dataset. Once the dataset is cached, filtering can be very fast.
Now, consider a different report design. This time the report design maps the sales year
report parameter to a dataset parameter. This way, Power BI injects the year value into
the native query, and the dataset retrieves data only for that year. Each time the report
user changes the year report parameter value—and then views the report—the dataset
retrieves a new query result for just that year.
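To sketch the difference between the two designs (the table, column, and parameter names are assumptions), compare a query that retrieves all years, leaving the filtering to a dataset filter mapped to the report parameter, with one that injects the report parameter value into the native query:
SQL
-- Design 1 sketch: retrieve all years; a dataset filter applies the report parameter
SELECT [OrderDate], [SalesAmount]
FROM [Sales];

-- Design 2 sketch: the @SalesYear dataset parameter is mapped to the report parameter
SELECT [OrderDate], [SalesAmount]
FROM [Sales]
WHERE YEAR([OrderDate]) = @SalesYear;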
Both design approaches can filter report data, and both designs can work well for your
report designs. An optimized design, however, will depend on the anticipated volumes
of data, data volatility, and the anticipated behaviors of your report users.
We recommend dataset filtering when you anticipate a different subset of the dataset
rows will be reused many times (thereby saving rendering time because new data
doesn't need to be retrieved). In this scenario, you recognize that the cost of retrieving a
larger dataset can be traded off against the number of times it will be reused. So, it's
helpful for queries that are time consuming to generate. But take care—caching large
datasets on a per-user basis may negatively impact performance, and capacity
throughput.
Data integration
If you need to combine data from multiple data sources, you have two options:
Combine report datasets: If the data sources are natively supported by paginated
reports, you can consider creating calculated fields that use the Lookup or
LookupSet Report Builder functions.
Develop a Power BI Desktop model: It's likely more efficient, however, that you
develop a data model in Power BI Desktop. You can use Power Query to combine
queries based on any supported data source. Once published to the Power BI
service, you can then develop a paginated report that connects to the Power BI
dataset.
Network latency
Network latency can impact report performance by increasing the time required for
requests to reach the Power BI service, and for responses to be delivered. Tenants in
Power BI are assigned to a specific region.
Tip
To determine where your tenant is located, see Where is my Power BI tenant located?
When users from a tenant access the Power BI service, their requests always route to this
region. As requests reach the Power BI service, the service may then send additional
requests—for example, to the underlying data source, or a data gateway—which are
also subject to network latency. In general, to minimize the impact of network latency,
strive to keep data sources, gateways, and your Power BI capacity as close as possible.
Preferably, they reside within the same region. If network latency is an issue, try locating
gateways and data sources closer to your Power BI capacity by placing them inside
cloud-hosted virtual machines.
SQL Server complex data types
Plotting spatial data in the map visualization requires SQL Server spatial data types, and these types aren't supported by the gateway. Therefore, it's not possible to work with the map visualization when SQL Server (accessed through a gateway) is your data source. To be clear, it will work if your data source is Azure SQL Database, because Power BI doesn't connect to it via a gateway.
Data-related images
Images can be used to add logos or pictures to your report layout. When images relate
to the rows retrieved by a report dataset, you have two options:
Image data can be retrieved from your data source (when it's already stored in a table).
When the images are stored on a web server, you can use a dynamic expression to create the image URL path.
For more information and suggestions, see Image guidance for paginated reports.
If you need to delete query fields from your dataset, we recommend you remove the
corresponding columns from the dataset query. Report Builder will automatically
remove any redundant query fields. If you do happen to delete query fields, be sure to
also modify the dataset query statement to remove the columns.
Unused datasets
When a report is run, all datasets are evaluated—even if they're not bound to report
objects. For this reason, be sure to remove any test or development datasets before you
publish a report.
Next steps
For more information related to this article, check out the following resources:
This article targets you as a report author designing Power BI paginated reports. It
provides suggestions when working with images. Commonly, images in report layouts
can display a graphic like a company logo, or pictures.
Suggestions
Consider the following suggestions to deliver professional report layouts, ease of
maintenance, and optimized report performance:
Use smallest possible size: We recommend you prepare images that are small in
size, yet still look sharp, and crisp. It's all about a balance between quality and size.
Consider using a graphics editor (like MS Paint) to reduce the image file size.
Avoid embedded images: First, embedded images can bloat the report file size,
which can contribute to slower report rendering. Second, embedded images can
quickly become a maintenance nightmare if you need to update many report
images (as might be the case should your company logo change).
Use web server storage: Storing images on a web server is a good option,
especially for the company logo, which may be sourced from the company
website. However, take care if your report users will access reports outside your
network. In this case, be sure that the images are available over the Internet and
do not require authentication or additional sign-in to access the image. Images
stored on a web server must not exceed 4 MB in size or they will not load in the
Power BI service.
When images relate to your data (like pictures of your salespeople), name image
files so a report expression can dynamically produce the image URL path. For
example, you could name the salespeople pictures using each salesperson's
employee number. Providing the report dataset retrieves the employee number,
you can write an expression to produce the full image URL path.
Use database storage: When a relational database stores image data, it makes
sense to source the image data directly from the database tables—especially when
the images are not too large.
Also, when your report layout includes background or decorative images, be sure to use watermark-styled images. Generally, watermark-styled images have a transparent background (or have the same background color used by the report), and they use faint colors. Common examples of watermark-styled images include the company logo, or sensitivity labels like "Draft" or "Confidential".
Next steps
For more information related to this article, check out the following resources:
This article targets you as a report author designing Power BI paginated reports. It
provides scenarios for designing cascading parameters. Cascading parameters are
report parameters with dependencies. When a report user selects a parameter value (or
values), it's used to set available values for another parameter.
Design scenarios
There are two design scenarios for using cascading parameters. They can be effectively used to:
Filter large sets of available items
Present relevant items
Example database
The examples presented in this article are based on an Azure SQL Database. The
database records sales operations, and contains various tables storing resellers,
products, and sales orders.
A table named Reseller stores one record for each reseller, and it contains many
thousands of records. The Reseller table has these columns:
ResellerCode (integer)
ResellerName
Country-Region
State-Province
City
PostalCode
There's a table named Sales, too. It stores sales order records, and has a foreign key
relationship to the Reseller table, on the ResellerCode column.
Example requirement
There's a requirement to develop a Reseller Profile report. The report must be designed
to display information for a single reseller. It's not appropriate to have the report user
enter a reseller code, as they rarely memorize them.
Here's how you can develop the cascading parameters:
1. Create the five report parameters, ordered in the correct sequence: CountryRegion, StateProvince, City, PostalCode, and Reseller.
2. Create the CountryRegion dataset that retrieves distinct country-region values, using the following query statement:
SQL
SELECT DISTINCT
[Country-Region]
FROM
[Reseller]
ORDER BY
[Country-Region]
3. Create the StateProvince dataset that retrieves distinct state-province values for
the selected country-region, using the following query statement:
SQL
SELECT DISTINCT
[State-Province]
FROM
[Reseller]
WHERE
[Country-Region] = @CountryRegion
ORDER BY
[State-Province]
4. Create the City dataset that retrieves distinct city values for the selected country-
region and state-province, using the following query statement:
SQL
SELECT DISTINCT
[City]
FROM
[Reseller]
WHERE
[Country-Region] = @CountryRegion
AND [State-Province] = @StateProvince
ORDER BY
[City]
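5. Create the PostalCode dataset that retrieves distinct postal code values for the selected geographic values. The following query statement is a sketch that follows the same pattern as the other datasets:
SQL
SELECT DISTINCT
[PostalCode]
FROM
[Reseller]
WHERE
[Country-Region] = @CountryRegion
AND [State-Province] = @StateProvince
AND [City] = @City
ORDER BY
[PostalCode]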
6. Create the Reseller dataset to retrieve all resellers for the selected geographic
values, using the following query statement:
SQL
SELECT
[ResellerCode],
[ResellerName]
FROM
[Reseller]
WHERE
[Country-Region] = @CountryRegion
AND [State-Province] = @StateProvince
AND [City] = @City
AND [PostalCode] = @PostalCode
ORDER BY
[ResellerName]
7. For each dataset except the first, map the query parameters to the corresponding
report parameters.
Note
All query parameters (prefixed with the @ symbol) shown in these examples could
be embedded within SELECT statements, or passed to stored procedures.
Generally, stored procedures are a better design approach. It's because their query
plans are cached for quicker execution, and they allow you to develop more
sophisticated logic, when needed. However, they aren't currently supported for
gateway relational data sources, which means SQL Server, Oracle, and Teradata.
Lastly, you should always ensure suitable indexes exist to support efficient data
retrieval. Otherwise, your report parameters could be slow to populate, and the
database could become overburdened. For more information about SQL Server
indexing, see SQL Server Index Architecture and Design Guide.
A different design approach is to filter resellers by a grouping, like the first letter of the reseller name. Here's how you can develop the cascading parameters:
1. Create the ReportGroup and Reseller report parameters, ordered in the correct sequence.
2. Create the ReportGroup dataset to retrieve the first letters used by all resellers, using the following query statement:
SQL
SELECT DISTINCT
LEFT([ResellerName], 1) AS [ReportGroup]
FROM
[Reseller]
ORDER BY
[ReportGroup]
3. Create the Reseller dataset to retrieve all resellers that commence with the
selected letter, using the following query statement:
SQL
SELECT
[ResellerCode],
[ResellerName]
FROM
[Reseller]
WHERE
LEFT([ResellerName], 1) = @ReportGroup
ORDER BY
[ResellerName]
4. Map the query parameter of the Reseller dataset to the corresponding report
parameter.
It's more efficient to add the grouping column to the Reseller table. When persisted and
indexed, it delivers the best result. For more information, see Specify Computed
Columns in a Table.
SQL
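-- Illustrative sketch only: persists the grouping column (an index can then be added)
ALTER TABLE [Reseller]
ADD [ReportGroup] AS LEFT([ResellerName], 1) PERSISTED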
This technique can deliver even greater potential. Consider the following script that adds
a new grouping column to filter resellers by pre-defined bands of letters. It also creates
an index to efficiently retrieve the data required by the report parameters.
SQL
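-- Illustrative sketch only: the band boundaries, column name, and index name are assumptions
ALTER TABLE [Reseller]
ADD [ReportGroup2] AS CASE
        WHEN [ResellerName] LIKE '[A-C]%' THEN 'A-C'
        WHEN [ResellerName] LIKE '[D-H]%' THEN 'D-H'
        WHEN [ResellerName] LIKE '[I-M]%' THEN 'I-M'
        WHEN [ResellerName] LIKE '[N-S]%' THEN 'N-S'
        ELSE 'T-Z'
    END PERSISTED
GO

CREATE NONCLUSTERED INDEX [Reseller_ReportGroup2]
ON [Reseller] ([ReportGroup2]) INCLUDE ([ResellerCode], [ResellerName])
GO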
Another design approach is to let the report user search for resellers by entering a pattern. Here's how you can develop the cascading parameters:
1. Create the Search and Reseller report parameters, ordered in the correct sequence.
2. Create the Reseller dataset to retrieve all resellers that contain the search text,
using the following query statement:
SQL
SELECT
[ResellerCode],
[ResellerName]
FROM
[Reseller]
WHERE
[ResellerName] LIKE '%' + @Search + '%'
ORDER BY
[ResellerName]
3. Map the query parameter of the Reseller dataset to the corresponding report
parameter.
Tip
You can improve upon this design to provide more control for your report users. It
lets them define their own pattern matching value. For example, the search value
"red%" will filter to resellers with names that commence with the characters "red".
Here's how you can let the report users define their own pattern.
SQL
WHERE
[ResellerName] LIKE @Search
Many non-database professionals, however, don't know about the percentage (%)
wildcard character. Instead, they're familiar with the asterisk (*) character. By modifying
the WHERE clause, you can let them use this character.
SQL
WHERE
[ResellerName] LIKE REPLACE(@Search, '*', '%')
In the next example, the report user interacts with three report parameters. The first two set a
date range of sales order dates. The third parameter then lists resellers where orders
have been created during that time period.
Here's how you can develop the cascading parameters:
1. Create the OrderDateStart, OrderDateEnd, and Reseller report parameters, ordered in the correct sequence.
2. Create the Reseller dataset to retrieve all resellers that created orders in the date period, using the following query statement:
SQL
SELECT DISTINCT
[r].[ResellerCode],
[r].[ResellerName]
FROM
[Reseller] AS [r]
INNER JOIN [Sales] AS [s]
ON [s].[ResellerCode] = [r].[ResellerCode]
WHERE
[s].[OrderDate] >= @OrderDateStart
AND [s].[OrderDate] < DATEADD(DAY, 1, @OrderDateEnd)
ORDER BY
[r].[ResellerName]
Recommendations
We recommend you design your reports with cascading parameters, whenever possible. It's because they:
Provide intuitive and helpful experiences for your report users
Result in more efficient dataset queries, because they retrieve smaller sets of available values
Next steps
For more information related to this article, check out the following resources:
This article targets you as a report author designing Power BI paginated reports. It
provides recommendations to help you avoid blank pages when your report is exported
to a hard-page format—like PDF or Microsoft Word—or, is printed.
Page setup
Report page size properties determine the page orientation, dimensions, and margins.
Access these report properties by:
Using the report Property Page: Right-click the dark gray area outside the report
canvas, and then select Report Properties.
Using the Properties pane: Click the dark gray area outside the report canvas to
select the report object. Ensure the Properties pane is open.
The Page Setup page of the report Property Page provides a friendly interface to view
and update the page setup properties.
Ensure all page size properties are correctly configured:
Paper size: Select a paper size, or assign custom width and height values.
Margins: Set appropriate values for the left, right, top, and bottom margins.
A common reason why blank pages are output is that the report body width exceeds the available page space.
You can only view and set the report body width using the Properties pane. First, click
anywhere in an empty area of the report body.
Ensure the width value doesn't exceed available page width. Be guided by the following
formula:
Report body width <= Report page width - (Left margin + Right margin)
Note
It's not possible to reduce the report body width when there are report objects
already in the space you want to remove. You must first reposition or resize them
before reducing the width.
Also, the report body width can increase automatically when you add new objects,
or resize or reposition existing objects. The report designer always widens the body
to accommodate the position and size of its contained objects.
We recommend you always reduce the height of the body to remove any trailing space.
Page break options
Each data region and data visualization has page break options. You can access these
options in its property page, or in the Properties pane.
Ensure the Add a page break after property isn't unnecessarily enabled.
Next steps
For more information related to this article, check out the following resources:
This article targets Power BI Report Server and SQL Server Reporting Services (SSRS)
report authors and Power BI administrators. It provides you with guidance to help you
migrate your Report Definition Language (.rdl) reports to Power BI.
Flow diagram showing the path for migrating on-premises .rdl reports to paginated reports in the Power BI service.
Note
Guidance is divided into four stages. We recommend that you first read the entire article
prior to migrating your reports.
You can achieve migration without downtime to your report servers, or disruption to
your report users. It's important to understand that you don't need to remove any data
or reports. So, it means you can keep your current environment in place until you're
ready for it to be retired.
Before you start
Before you start the migration, verify that your environment meets certain prerequisites.
We'll describe these prerequisites, and also introduce you to a helpful migration tool.
Supported versions
You can migrate report server instances running on premises, or on virtual machines
hosted by cloud providers like Azure.
Recent SQL Server Reporting Services versions are supported for migration to Power BI. You can migrate .rdl files from Power BI Report Server as well.
You can use the RDL Migration Tool to help automate the migration. It doesn't modify or remove your existing reports. On completion, the tool outputs a
summary of all actions completed, successful or unsuccessful.
Over time, Microsoft may improve the tool. The community is encouraged to contribute
and help enhance it, too.
Pre-migration stage
After verifying that your organization meets the pre-requisites, you're ready to start the
Pre-migration stage. This stage has three phases:
1. Discover
2. Assess
3. Prepare
Discover
The goal of the Discover phase is to identify your existing report server instances. This
process involves scanning the network to identify all report server instances in your
organization.
You can use the Microsoft Assessment and Planning Toolkit . The "MAP Toolkit"
discovers and reports on your report server instances, versions, and installed features.
It's a powerful inventory, assessment, and reporting tool that can simplify your
migration planning process.
Organizations may have hundreds of SQL Server Reporting Services (SSRS) reports.
Some of those reports may become obsolete due to lack of use. The article Find and
retire unused reports can help you discover unused reports and create a cadence for cleaning them up.
Assess
Having discovered your report server instances, the goal of the Assess phase is to
understand any .rdl reports—or server items—that can't be migrated.
Your .rdl reports can be migrated from your report servers to Power BI. Each migrated
.rdl report will become a Power BI paginated report.
The following report server item types, however, can't be migrated to Power BI:
Shared data sources and shared datasets: The RDL Migration Tool automatically
converts shared data sources and shared datasets into embedded data sources
and datasets, provided that they're using supported data sources.
Resources such as image files
Linked reports migrate, whether the parent report that links to them is selected for
migration or not. In the Power BI service, they're regular .rdl reports.
KPIs: Power BI Report Server, or Reporting Services 2016 or later—Enterprise
Edition only
Mobile reports: Power BI Report Server, or Reporting Services 2016 or later—
Enterprise Edition only
Report models: deprecated
Report parts: deprecated
If your .rdl reports rely on features not yet supported by Power BI paginated reports, you
can plan to redevelop them as Power BI reports, when it makes sense.
For more information about supported data sources for paginated reports in the Power
BI service, see Supported data sources for Power BI paginated reports.
Generally, Power BI paginated reports are optimized for printing, or PDF generation.
Power BI reports are optimized for exploration and interactivity. For more information,
see When to use paginated reports in Power BI.
Differences in PDF output occur most often when a font that doesn't support non-Latin
characters is used in a report and then non-Latin characters are added to the report. You
should test the PDF rendering output on both the report server and the client
computers to verify that the report renders correctly.
Prepare
The goal of the Prepare phase involves getting everything ready. It covers setting up the
Power BI environment, planning how you'll secure and publish your reports, and ideas
for redeveloping report server items that won't migrate.
1. Verify support for your report data sources, and set up a Power BI gateway to allow
connectivity with any on-premises data sources.
2. Become familiar with Power BI security, and plan how you'll reproduce your report
server folders and permissions with Power BI workspaces.
3. Become familiar with Power BI sharing, and plan how you'll distribute content by
publishing Power BI apps.
4. Consider using shared Power BI datasets in place of your report server shared data
sources.
5. Use Power BI Desktop to develop mobile-optimized reports, possibly using the
Power KPI custom visual in place of your report server mobile reports and KPIs.
6. Reevaluate the use of the UserID built-in field in your reports. If you rely on the
UserID to secure report data, then understand that for paginated reports (when
hosted in the Power BI service) it returns the User Principal Name (UPN). So,
instead of returning the NT account name, for example AW\adelev, the built-in
field returns the user's UPN, which resembles an email address. You'll need to revise your
dataset definitions, and possibly the source data. Once revised and published, we
recommend you thoroughly test your reports to ensure data permissions work as
expected.
7. Reevaluate the use of the ExecutionTime built-in field in your reports. For
paginated reports (when hosted in the Power BI service), the built-in field returns
the date/time in Coordinated Universal Time (or UTC). It could impact on report
parameter default values, and report execution time labels (typically added to
report footers).
8. If your data source is SQL Server (on premises), verify that reports aren't using map
visualizations. The map visualization depends on SQL Server spatial data types, and
these aren't supported by the gateway. For more information, see Data retrieval
guidance for paginated reports (SQL Server complex data types).
9. For cascading parameters, be mindful that parameters are evaluated sequentially.
Try preaggregating report data first. For more information, see Use cascading
parameters in paginated reports.
10. Ensure your report authors have Power BI Report Builder installed, and that you
can easily distribute later releases throughout your organization.
11. Utilize capacity planning documentation for paginated reports.
Migration stage
After preparing your Power BI environment and reports, you're ready for the Migration
stage.
There are two migration options: manual and automated. Manual migration is suited to
a small number of reports, or reports requiring modification before migration.
Automated migration is suited to the migration of a large number of reports.
Manual migration
Anyone with permission to access the report server instance and the Power BI
workspace can manually migrate reports to Power BI. Here are the steps to follow:
1. Open the report server portal that contains the reports you want to migrate.
2. Download each report definition, saving the .rdl files locally.
3. Open the latest version of Power BI Report Builder, and connect to the Power BI
service using your Azure AD credentials.
4. Open each report in Power BI Report Builder, and then:
a. Verify all data sources and datasets are embedded in the report definition, and
that they're supported data sources.
b. Preview the report to ensure it renders correctly.
c. Select Publish, then select Power BI service.
d. Select the workspace where you want to save the report.
e. Verify that the report saves. If certain features in your report design aren't yet
supported, the save action will fail. You'll be notified of the reasons. You'll then
need to revise your report design, and try saving again.
Automated migration
There are three options for automated migration:
For Power BI Report Server and SQL Server 2022, use the Publish .rdl files to Power BI feature.
For previous versions of Reporting Services, use the RDL Migration Tool on GitHub.
For custom requirements, use the publicly available APIs for Power BI Report Server,
Reporting Services, and Power BI.
You can also use the publicly available Power BI Report Server, Reporting Services, and
Power BI APIs to automate the migration of your content. While the RDL Migration Tool
already uses these APIs, you can develop a custom tool suited to your exact
requirements.
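As an illustration of the API route, here's a minimal C# sketch that uploads a single .rdl file to a workspace by calling the Power BI Imports REST API. It's a hedged example rather than a complete migration tool: it assumes you've already acquired an Azure AD access token with the required Power BI scopes, and the class name, workspace ID, and file path are placeholders for your own values.
C#
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hypothetical helper: upload one .rdl file to a Power BI workspace by using the
// Imports REST API. Keeping the .rdl extension in datasetDisplayName tells the
// Power BI service to create a paginated report.
public static class RdlUploader
{
    public static async Task UploadRdlAsync(string accessToken, Guid workspaceId, string rdlPath)
    {
        string reportName = Path.GetFileName(rdlPath); // for example, SalesSummary.rdl
        string url = $"https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/imports" +
                     $"?datasetDisplayName={Uri.EscapeDataString(reportName)}";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        using var content = new MultipartFormDataContent();
        content.Add(new StreamContent(File.OpenRead(rdlPath)), "file", reportName);

        HttpResponseMessage response = await client.PostAsync(url, content);
        response.EnsureSuccessStatusCode(); // the response body contains the import ID
    }
}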
1. Test the reports in each browser supported by Power BI to confirm the report
renders correctly.
2. Run tests to compare report rendering times on the report server and in the Power
BI service. Check that Power BI reports render in an acceptable time.
3. For long-rendering reports, consider having Power BI deliver them to your report
users as email subscriptions with report attachments.
4. For Power BI reports based on Power BI datasets, review model designs to ensure
they're fully optimized.
Reconcile issues
The Post-migration phase is crucial for reconciling any issues and for addressing any
performance concerns. Adding the paginated reports workload to a capacity can
contribute to slow performance for paginated reports and for other content stored in the
capacity.
Next steps
For more information about this article, check out the following resources:
Publish .rdl files to Power BI from Power BI Report Server and SQL Server Reporting
Services
RDL Migration Tool for older versions of Reporting Services
Power BI Report Builder
Data retrieval guidance for paginated reports
When to use paginated reports in Power BI
Paginated reports in Power BI: FAQ
Online course: Paginated Reports in a Day
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Power BI partners are available to help your organization succeed with the migration
process. To engage a Power BI partner, visit the Power BI partner portal .
Publish .rdl files to Power BI from Power
BI Report Server and Reporting Services
Article • 04/12/2023
Do you have Report Definition Language (.rdl) paginated reports in Power BI Report
Server or SQL Server 2022 Reporting Services (SSRS) that you want to migrate to the
Power BI service? This article provides step-by-step instructions for migrating .rdl files
and Power BI reports (.pbix files) from Power BI Report Server and SQL Server 2022
Reporting Services to the Power BI service.
Note
If you're using a previous version of Reporting Services, continue to use the RDL
Migration Tool for now.
You can migrate reports without downtime to your report servers or disruption to your
report users. It's important to understand that you don't need to remove any data or
reports. You can keep your current environment in place until you're ready for it to be
retired.
Prerequisites
My Workspace
You can publish and share paginated reports to your My Workspace with a Power BI free
license.
Other workspaces
To publish to other workspaces, you need to meet these prerequisites:
Supported versions
You can migrate reports from SSRS instances running on-premises or on virtual
machines hosted by cloud providers like Azure.
This publishing tool is designed to help customers migrate SSRS paginated reports (.rdl
files) from their local servers to a Power BI workspace in their tenant. As part of the
migration process, the tool also:
You can only migrate .rdl reports from your SSRS servers to Power BI. Each migrated .rdl
report becomes a Power BI paginated report.
You can publish individual .rdl reports or the contents of entire folders from the SSRS
web portal to the Power BI service. Read on to learn how to publish .rdl reports to Power
BI.
Select Publish to find the .rdl files you want to publish from SSRS to the Power BI
service.
Select Publish all reports to select all the .rdl files in the current folder and
start the migration.
Select Select reports to publish to open a list view of all .rdl files in the current
folder. Select the reports and folders you want to migrate.
If you chose Select reports to publish, the next step is to Select reports to publish
to Power BI.
Step 2: Sign in/Sign up
SQL Server Reporting Services
After you've selected the reports you want to publish, it's time to Sign in to the
Power BI service.
Step 3: Choose a workspace
SQL Server Reporting Services
Now that you're signed in, select the dropdown arrow to find and Select a
workspace.
Site properties
If you'd like to disable the migration setting, you need to update your report server. For
more information on server properties, see the article Server Properties Advanced Page -
Power BI Report Server & Reporting Services:
EnablePowerBIReportMigrate
PowerBIMigrateCountLimit
PowerBIMigrateUrl
For sovereign clouds, you can update the Power BI endpoints by changing the site
settings in the web portal.
Next steps
More questions? Try asking the Reporting Services forum
Find and retire unused .rdl reports
Article • 03/08/2023
Your company may deal with hundreds of paginated reports (.rdl files) in Power BI
Report Server and SQL Server Reporting Services (SSRS). Some of those reports may
become obsolete and need to be retired. As a report author or administrator, you don't
want to migrate unused reports to the Power BI service . As you plan for a migration to
the cloud, we suggest doing some housekeeping to get rid of unused .rdl reports. This
best practice supports retention governance and allows your organization to make use
of a retention schedule and data policy.
There are two processes for checking unused reports. You can also extend the cleanup to
unused database objects, such as tables that could contain stale data.
Once you've filled your audit log with tables and stored procedures used for reports,
you can export those objects to an Excel file and share them with stakeholders. Let them
know you're preparing to deprecate unused objects.
Note
Some important reports may run only rarely, so be sure to ask for feedback on
database objects that are infrequently used. When you deprecate an object, you can
rename it by placing a zdel prefix in front of it, so the object drops to the bottom
of the Object Explorer. This way, if you decide later that you need the zdel object,
you can rename it back to the original. Once you know you're ready to remove
objects from your database, you can create a cadence to delete unused objects.
Create a Reports Usage metrics list
Second, you'll want to create an .rdl Reports Usage metrics list by querying the report
server database. Use the following T-SQL to derive the usage counts. If your report server
is configured to store one year of report execution history, you can use a specific date to
filter the usage metrics.
Transact-SQL
; with UnusedReportsCte
AS
(
    SELECT Cat.Name, Cat.Path, COUNT(ExeLog.TimeStart) AS Cnt
    FROM Catalog AS Cat
    LEFT OUTER JOIN ExecutionLog AS ExeLog ON ExeLog.ReportID = Cat.ItemID
    WHERE Cat.Type = 2 -- 2 = report
    GROUP BY Cat.Name, Cat.Path
)
SELECT Name, Path, Cnt FROM UnusedReportsCte WHERE Cnt = 0 ORDER BY Name;
Note
Subreports and linked reports don't appear in the execution log if the parent report
is executed.
From here you can decide whether to delete the unused reports right away or replace
the report with a message. You can let your users know the report is no longer being
used, so they can contact an administrator for support. Then you can develop a cadence
to delete them over time.
See also
Publish .rdl files to Power BI from Reporting Services
Migrate SQL Server Reporting Services reports to Power BI
Develop scalable multitenancy
applications with Power BI embedding
Article • 03/20/2023
This article describes how to develop a multitenancy application that embeds Power BI
content while achieving the highest levels of scalability, performance, and security. By
designing and implementing an application with service principal profiles, you can create
and manage a multitenancy solution comprising tens of thousands of customer tenants
that can deliver reports to audiences of over 100,000 users.
Service principal profiles are a feature that makes it easier for you to manage
organizational content in Power BI and use your capacities more efficiently. However,
using service principal profiles can add complexity to your application design. Therefore,
you should only use them when there's a need to achieve significant scale. We
recommend using service principal profiles when you have many workspaces and more
than 1,000 application users.
Note
The value of using service principal profiles increases with your need to scale and
with your need to achieve the highest levels of security and tenant isolation.
You can achieve Power BI embedding by using two different embedding scenarios:
Embed for your organization and Embed for your customer.
The Embed for your organization scenario applies when the application audience
comprises internal users. Internal users have organizational accounts and must
authenticate with Microsoft Azure Active Directory (Azure AD). In this scenario, Power BI
is software-as-a-service (SaaS). It's sometimes referred to as User owns data.
The Embed for your customer scenario applies when the application audience
comprises external users. The application is responsible for authenticating users. To
access Power BI content, the application relies on an embedding identity (Azure AD
service principal or master user account) to authenticate with Azure AD. In this scenario,
Power BI is platform-as-a-service (PaaS). It's sometimes referred to as App owns data.
Note
It's important to understand that the service principal profiles feature was designed
for use with the Embed for your customer scenario. That's because this scenario
offers ISVs and enterprise organizations the ability to embed with greater scale to a
large number of users and to a large number of customer tenants.
To create a scalable multitenancy solution, you must be able to automate the creation of
new customer tenants. Provisioning a new customer tenant typically involves writing
code that uses the Power BI REST API to create a new Power BI workspace, create
datasets by importing Power BI Desktop (.pbix) files, update data source parameters, set
data source credentials, and set up scheduled dataset refresh. The following diagram
shows how you can add Power BI items, such as reports and datasets, to workspaces to
set up customer tenants.
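To make that flow concrete, here's a minimal C# sketch of the provisioning sequence using the Power BI .NET SDK (Microsoft.PowerBI.Api). It's an outline under stated assumptions, not a complete implementation: pbiClient is assumed to be a PowerBIClient that's already authenticated as the embedding identity, the class name, workspace name, and .pbix path are placeholders, and setting credentials and scheduled refresh are only noted in comments.
C#
using System.IO;
using Microsoft.PowerBI.Api;
using Microsoft.PowerBI.Api.Models;

public static class TenantProvisioning
{
    // Sketch of provisioning a new customer tenant. Assumes pbiClient is already
    // authenticated; names and paths are placeholders.
    public static Group ProvisionCustomerTenant(PowerBIClient pbiClient, string customerName)
    {
        // 1. Create a workspace for the new customer tenant.
        var workspace = pbiClient.Groups.CreateGroup(
            new GroupCreationRequest($"Tenant - {customerName}"));

        // 2. Import a Power BI Desktop (.pbix) file to create the dataset and report.
        using (var pbixStream = File.OpenRead(@"C:\Templates\SalesTemplate.pbix"))
        {
            pbiClient.Imports.PostImportWithFileInGroup(workspace.Id, pbixStream, "Sales");
        }

        // 3. After the import completes, update data source parameters, set data source
        //    credentials, and configure scheduled refresh by using the Datasets and
        //    Gateways APIs (omitted here for brevity).
        return workspace;
    }
}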
When you develop an application that uses the Embed for your customer scenario, it's
possible to make Power BI REST API calls by using an embedding identity that's either a
master user account or a service principal. We recommend using a service principal for
production applications. It provides the highest security and for this reason it's the
approach recommended by Azure AD. Also, it supports better automation and scale and
there's less management overhead. However, it requires Power BI admin rights to set up
and manage.
By using a service principal, you can avoid common problems associated with master
user accounts, such as authentication errors in environments where users are required to
sign in by using multifactor authentication (MFA). Using a service principal is also
consistent with the idea that the Embed for your customer scenario is based on
embedding Power BI content by using a PaaS mindset as opposed to a SaaS mindset.
1,000-workspace limitation
When you design a multitenancy environment that implements the Embed for your
customer scenario, be sure to consider that the embedding identity can't be granted
access to more than 1,000 workspaces. The Power BI service imposes this limitation to
ensure good performance when making REST API calls. The reason for this limitation is
related to how Power BI maintains security-related metadata for each identity.
Power BI uses metadata to track the workspaces and workspace items an identity can
access. In effect, Power BI must maintain a separate access control list (ACL) for each
identity in its authorization subsystem. When an identity makes a REST API call to access
a workspace, Power BI must do a security check against the identity's ACL to ensure it's
authorized. The time it takes to determine whether the target workspace is inside the
ACL increases exponentially as the number of workspaces increases.
Note
Power BI doesn't enforce the 1,000-workspace limitation through code. If you add
an embedding identity to more than 1,000 workspaces, REST API calls will still
execute successfully. However, your application will move into an unsupported
state, which may have implications should you need to request help from
Microsoft support.
Consider a scenario where two multi-tenant applications have each been set up to use a
single service principal. Now consider that the first application has created 990
workspaces while the second application has created 1,010 workspaces. From a support
perspective, the first application is within the supported boundaries while the second
application isn't.
Now compare these two applications purely from a performance perspective. There's
not much difference, because the metadata for both service principals' ACLs has grown
to a point where it degrades performance to some degree.
Here's the key observation: The number of workspaces created by a service principal has
a direct impact on performance and scalability. A service principal that's a member of
100 workspaces will execute REST API calls faster than a service principal that's a
member of 1,000 workspaces. Likewise, a service principal that's a member of only 10
workspaces will execute REST API calls faster than a service principal that's a member of
100 workspaces.
Dataset ownership
Each Power BI dataset has a single owner, which can be either a user account or a
service principal. Dataset ownership is required to set up scheduled refresh and set
dataset parameters.
Tip
In the Power BI service, you can determine who the dataset owner is by opening
the dataset settings.
If necessary, you can transfer ownership of the dataset to another user account or
service principal. You can do that in the Power BI service, or by using the REST API
TakeOver operation. When you import a Power BI Desktop file to create a new dataset
by using a service principal, the service principal is automatically set as the dataset
owner.
Data source credentials
To connect a dataset to its underlying data source, the dataset owner must set data
source credentials. Data source credentials are encrypted and cached by Power BI. From
that point, Power BI uses those credentials to authenticate with the underlying data
source when refreshing the data (for import storage tables) or executing passthrough
queries (for DirectQuery storage tables).
We recommend that you apply a common design pattern when provisioning a new
customer tenant. You can execute a series of REST API calls by using the identity of the
service principal:
1. Create a new workspace.
2. Import a Power BI Desktop (.pbix) file to create the dataset and report.
3. Update the dataset's data source parameters.
4. Set the data source credentials.
5. Set up scheduled dataset refresh.
On completion of these REST API calls, the service principal will be an admin of the new
workspace and the owner of the dataset and data source credentials.
Important
It's possible for a service principal to create data source credentials that are shared by
datasets in different workspaces across customer tenants, as shown in the following
diagram.
When data source credentials are shared by datasets that belong to different customer
tenants, the customer tenants aren't fully isolated.
There are strengths and weaknesses associated with each of the three design strategies: a
single service principal, service principal pooling, and one service principal per workspace.
The single service principal design strategy requires a once-off creation of an Azure AD
app registration. Therefore, it involves less administrative overhead than the other two
design strategies because there's no requirement to create more Azure AD app
registrations. This strategy is also the most straightforward to set up because it doesn't
require writing extra code that switches the calling context between service principals
when making REST API calls. However, a problem with this design strategy is that it
doesn't scale. It only supports a multitenancy environment that can grow up to 1,000
workspaces. Also, performance is sure to degrade as the service principal is granted
access to a larger number of workspaces. There's also a problem with customer tenant
isolation because the single service principal becomes the owner of every dataset and all
data credentials across all customer tenants.
The service principal pooling design strategy is commonly used to avoid the 1,000-
workspace limitation. It allows the application to scale to any number of workspaces by
adding the correct number of service principals to the pool. For example, a pool of five
service principals makes it possible to scale up to 5,000 workspaces; a pool of 80 service
principals makes it possible to scale up to 80,000 workspaces, and so on. However, while
this strategy can scale to a large number of workspaces, it has several disadvantages.
First, it requires writing extra code and storing metadata to allow context switching
between service principals when making REST API calls. Second, it involves more
administrative effort because you must create Azure AD app registrations whenever you
need to increase the number of the service principals in the pool.
What's more, the service principal pooling strategy isn't optimized for performance
because it allows service principals to become members of hundreds of workspaces. It
also isn't ideal from the perspective of customer tenant isolation because the service
principals can become owners of dataset and data credentials shared across customer
tenants.
The one service principal per workspace design strategy involves creating a service
principal for each customer tenant. From a theoretical perspective, this strategy offers
the best solution because it optimizes the performance of REST API calls while providing
true isolation for datasets and data source credentials at the workspace level. However,
what works best in theory doesn't always work best in practice, because the requirement
to create a service principal for each customer tenant is impractical for many
organizations. Some organizations have formal approval processes, or excessive
bureaucracy, for creating Azure AD app registrations. These factors can make it
impossible to grant a custom application the authority it needs to create Azure AD app
registrations on demand, in the automated way that your solution requires.
In less common scenarios where a custom application has been granted proper
permissions, it can use the Microsoft Graph API to create Azure AD app registrations on
demand. However, the custom application is often complex to develop and deploy
because it must somehow track authentication credentials for each Azure AD app
registration. It must also gain access to those credentials whenever it needs to
authenticate and acquire access tokens for individual service principals.
When you design a multitenancy application by using service principal profiles, you can
benefit from the strengths of the three design strategies (described in the previous
section) while avoiding their associated weaknesses.
Service principal profiles are local accounts that are created within the context of Power
BI. A service principal can use the Profiles REST API operation to create new service
principal profiles. A service principal can create and manage its own set of service
principal profiles for a custom application, as shown in the following diagram.
There's always a parent-child relationship between a service principal and the service
principal profiles it creates. You can't create a service principal profile as a stand-alone
entity. Instead, you create a service principal profile by using a specific service principal,
and that service principal serves as the profile's parent. Furthermore, a service principal
profile is never visible to user accounts or other service principals. A service principal
profile can only be seen and used by the service principal that created it.
The fact that service principal profiles aren't known to Azure AD has both advantages
and disadvantages. The primary advantage is that an Embed for your customer scenario
application doesn't need any special Azure AD permissions to create service principal
profiles. It also means that the application can create and manage a set of local
identities that are separate from Azure AD.
However, there are also disadvantages. Because service principal profiles aren't known
to Azure AD, you can't add a service principal profile to an Azure AD group to implicitly
grant it access to a workspace. Also, external data sources, such as an Azure SQL
Database or Azure Synapse Analytics, can't recognize service principal profiles as the
identity when connecting to a database. So, the one service principal per workspace
design strategy (creating a service principal for each customer tenant) might be a better
choice when there's a requirement to connect to these data sources by using a separate
service principal with unique authentication credentials for each customer tenant.
Tip
When you develop an Embed for your customer scenario application by using
service principal profiles, you only need to create a single Azure AD app
registration to provide your application with a single service principal. This
approach significantly lowers administrative overhead compared to other
multitenancy design strategies, where it's necessary to create additional Azure AD
app registrations on an ongoing basis after the application is deployed to
production.
It's important to understand that a service principal has an identity in Power BI that's
separate and distinct from the identities of its profiles. That provides you with choice as
a developer. You can execute REST API calls by using the identity of a service principal
profile. Alternatively, you can execute REST API calls without a profile, which uses the
identity of the parent service principal.
We recommend that you execute REST API calls as the parent service principal when
you're creating, viewing, or deleting service principal profiles. You should use the service
principal profile to execute all other REST API calls. These other calls can create
workspaces, import Power BI Desktop files, update dataset parameters, and set data
source credentials. They can also retrieve workspace item metadata and generate
embed tokens.
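For instance, here's a short, hedged sketch of generating an embed token while executing as a service principal profile. It assumes pbiClient was created with the X-PowerBI-profile-id header set (as described later in this article), that the Microsoft.PowerBI.Api.Models and System.Collections.Generic namespaces are imported, and that the workspace, report, and dataset IDs are placeholders.
C#
// Generate an embed token as a service principal profile.
// workspaceId (Guid), reportId (Guid), and datasetId (string) are placeholders.
var tokenRequest = new GenerateTokenRequestV2(
    datasets: new List<GenerateTokenRequestV2Dataset> { new GenerateTokenRequestV2Dataset(datasetId) },
    reports: new List<GenerateTokenRequestV2Report> { new GenerateTokenRequestV2Report(reportId) },
    targetWorkspaces: new List<GenerateTokenRequestV2TargetWorkspace> { new GenerateTokenRequestV2TargetWorkspace(workspaceId) });

EmbedToken embedToken = pbiClient.EmbedToken.GenerateToken(tokenRequest);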
Consider an example where you need to set up a customer tenant for a customer
named Contoso. The first step makes a REST API call to create a service principal profile
with its display name set to Contoso. This call is made by using the identity of the service
principal. All remaining set up steps use the service principal profile to complete the
following tasks:
1. Create a workspace.
2. Associate the workspace with a capacity.
3. Import a Power BI Desktop file.
4. Set dataset parameters.
5. Set data source credentials.
6. Set up scheduled data refresh.
It's important to understand that access to the workspace and its content must be done
by using the identity of the service principal profile that was used to create the customer
tenant. It's also important to understand that the parent service principal doesn't need
access to the workspace or its content.
Tip
Remember: When making REST API calls, use the service principal to create and
manage service principal profiles, and use the service principal profile to create, set
up, and access Power BI content.
Create Profile
Delete Profile
Get Profile
Get Profiles
Update Profile
Use the Create Profile REST API operation to create a service principal profile. You must
set the displayName property in the request body to provide a display name for the new
tenant. The value must be unique across all the profiles owned by the service principal.
The call will fail if another profile with that display name already exists for the service
principal.
A successful call returns the id property, which is a GUID that represents the profile.
When you develop applications that use service principal profiles, we recommend that
you store profile display names and their ID values in a custom database. That way, it's
straightforward for your application to retrieve the IDs.
If you're programming with the Power BI .NET SDK , you can use the
Profiles.CreateProfile method, which returns a ServicePrincipalProfile object
representing the new profile. It makes it straightforward to determine the id property
value.
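Here's a short, hedged sketch of that call. It assumes pbiClient is a PowerBIClient that's authenticated as the parent service principal, the display name is a placeholder, and the request and model type names follow the Power BI .NET SDK.
C#
// Create a service principal profile for a new customer tenant.
// Assumes pbiClient is authenticated as the parent service principal.
var request = new CreateOrUpdateProfileRequest { DisplayName = "Contoso" };
ServicePrincipalProfile profile = pbiClient.Profiles.CreateProfile(request);

// Store the returned ID (a GUID) together with the display name so your
// application can look it up later.
var profileId = profile.Id;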
To grant a service principal profile access to a workspace, the parent service principal
adds the profile as a workspace user by calling the Groups.AddGroupUser method:
C#
pbiClient.Groups.AddGroupUser(workspaceId, groupUser);
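The groupUser object in the previous call isn't defined in this excerpt. Here's one hedged way to construct it with the Power BI .NET SDK so that a specific profile receives admin access to the workspace; the service principal object ID and profile ID are placeholders, and the exact property typing (enum versus string) can vary between SDK versions.
C#
// Build a workspace user entry that grants a service principal profile admin rights.
// servicePrincipalObjectId and profileId are placeholders for your own values.
var groupUser = new GroupUser
{
    GroupUserAccessRight = GroupUserAccessRight.Admin,
    PrincipalType = PrincipalType.App,
    Identifier = servicePrincipalObjectId,
    Profile = new ServicePrincipalProfile { Id = profileId }
};
pbiClient.Groups.AddGroupUser(workspaceId, groupUser);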
In the Power BI service, in the workspace Access pane, you can determine which
identities, including security principals, have access.
If you're programming with the Power BI .NET SDK, you can use the
Profiles.DeleteProfile method.
If you're programming with the Power BI .NET SDK, you can use the Profiles.GetProfiles
method.
You must pass the access token for the parent service principal in the
Authorization header.
You must include a header named X-PowerBI-profile-id with the value of the
service principal profile's ID.
If you're using the Power BI .NET SDK, you can set the X-PowerBI-profile-id header
value explicitly by passing in the service principal profile's ID.
C#
// Create the Power BI client and execute REST API calls as a service principal profile.
// accessToken is an Azure AD access token acquired for the parent service principal.
string profileId = "11111111-1111-1111-1111-111111111111";
var tokenCredentials = new TokenCredentials(accessToken, "Bearer");
var pbiClient = new PowerBIClient(new Uri("https://api.powerbi.com"), tokenCredentials);
pbiClient.HttpClient.DefaultRequestHeaders.Add("X-PowerBI-profile-id", profileId);
Once you add the X-PowerBI-profile-id header to the PowerBIClient object, it's
straightforward to invoke methods, such as Groups.GetGroups, and have them execute
under the identity of the service principal profile.
As you program a multitenancy application, it's likely that you'll need to switch between
executing calls as the parent service principal and executing calls as a service principal
profile. A useful approach to manage context switching is to declare a class-level
variable that stores the PowerBIClient object. You can then create a helper method that
sets the variable with the correct object.
C#
// Class-level variable that stores the client for the current calling context.
private PowerBIClient pbiClient;

// Helper method that sets pbiClient to execute as the parent service principal
// (no profile ID) or as a specific service principal profile.
private void SetCallingContext(string profileId = "") {
    if (profileId.Equals("")) {
        pbiClient = GetPowerBIClient();
    }
    else {
        pbiClient = GetPowerBIClientForProfile(new Guid(profileId));
    }
}
When you need to create or manage a service principal profile, call the
SetCallingContext method without any parameters. This way, you can create and
manage profiles by using the identity of the parent service principal.
C#
// Execute as the parent service principal to create and manage profiles.
SetCallingContext();
When you need to create and set up a workspace for a new customer tenant, you want
to execute that code as a service principal profile. Therefore, you should call the
SetCallingContext method by passing in the profile's ID. This way, you can create the
workspace by using the identity of the service principal profile.
C#
// Execute as a service principal profile, and then create a workspace.
SetCallingContext(profileId);
GroupCreationRequest request = new GroupCreationRequest(workspaceName);
Group workspace = pbiClient.Groups.CreateGroup(request);
After you've used a specific service principal profile to create and configure a workspace,
you should continue to use that same profile to create and set up the workspace
content. There's no need to invoke the SetCallingContext method to complete the
setup.
Developer sample
We encourage you to download a sample application named
AppOwnsDataMultiTenant .
This sample application was developed by using .NET 6 and ASP.NET, and it
demonstrates how to apply the guidance and recommendations described in this article.
You can review the code to learn how to develop a multitenancy application that
implements the Embed for your customer scenario by using service principal profiles.
Next steps
For more information about this article, check out the following resources:
When it comes to localizing Power BI artifacts, such as datasets and reports, there are
three types of translations.
Metadata translation
Report label translation
Data translation
Metadata translation
Metadata translation provides localized values for dataset object properties. The object
types that support metadata translation include tables, columns, measures, hierarchies,
and hierarchy levels. Metadata translation rarely provides a complete solution by itself.
The following screenshot shows how metadata translations provide German names for
the measures displayed in Card visuals.
Metadata translation is also used to display column names and measure names in tables
and matrices.
Metadata translations are the easiest to create, manage, and integrate into a Power BI
report. By applying the features of Translations Builder to generate machine translations,
you can add the metadata translations you need to build and test a Power BI report.
Adding metadata translations to your dataset is an essential first step. For more
information, see Create multiple-language reports with Translations Builder.
A metadata translation represents the property for a dataset object that's been
translated for a specific language. If your dataset contains a table with an English name
of Products, you can add translations for the Caption property of this table object to
provide alternative names. These names appear when the report is rendered in a
different language.
In addition to the Caption property, which tracks an object's display name, dataset
objects also support adding metadata translations for two other properties, which are
Description and DisplayFolder.
When you begin designing a dataset that uses metadata translation, you can assume
that you always need translations for the Caption property. If you require support for
metadata translation for report authors who create and edit reports in the Power BI
service, you also need to provide metadata translations for the Description and
DisplayFolder properties.
Power BI reports and datasets that support metadata translation can only run in
workspaces that are associated with a dedicated capacity created using Power BI
Premium or the Power BI Embedded Service. Multiple-language reports don't load
correctly when launched from a workspace in the shared capacity. If you're working in a
Power BI workspace that doesn't display a diamond that indicates a Premium
workspace, multiple-language reports might not work as expected.
Power BI support for metadata translations only applies to datasets. Power BI Desktop
and the Power BI service don't support storing or loading translations for text values
stored as part of the report layout.
If you add a textbox or button to a Power BI report and then add a hard-coded text
value for a string displayed to the user, that text value is stored in the report layout. It
can't be localized. Avoid using hard-coded text values. Page tabs display names can't be
localized. You can design multiple-language reports so that page tabs are hidden and
never displayed to the user.
Report label translations are harder to create and manage than metadata translations
because Power BI provides no built-in feature to track or integrate them. Translations
Builder solves this problem by using a Localized Labels table, a hidden table in the
report's dataset to which you add one measure for each report label that requires
translation.
Data translation
Data translation provides translated values for text-based columns in the underlying
data itself. Suppose a Power BI report displays product names imported from the rows
of the Products table in an underlying database. Data translation is used to display
product names differently for users who speak different languages. For example, some
users see product names in English while other users see product names in other
languages.
Data translations also appear in the axes of cartesian visuals and in legends.
Data translation is harder to design and implement than the other two types of
translation. You must redesign the underlying data source with extra text columns for
secondary language translations. Once the underlying data source has been extended
with extra text columns, you can then use a powerful feature in Power BI Desktop called
Field Parameters. This feature uses filters to control loading the data translations for a
specific language.
Next steps
Use locale values in multiple-language Power BI reports
Use locale values in multiple-language
Power BI reports
Article • 07/21/2023
Every report that loads in the Power BI service initializes with a user context that
identifies a language and a geographical region known as a locale. In most cases, a
locale identifies a country/region. The Power BI service tracks the combination of the
user's language and locale using a culture name.
Note
In some cases, a culture name also includes other information. For example, there
are two different culture names for the language Serbian in Serbia, which are sr-
Cyrl-RS and sr-Latn-RS . The part in the middle known as the script (Cyrl and Latn)
indicates whether to use the Cyrillic alphabet or the Latin alphabet. For more
information, see RFC 4646 .
For a list of culture name values, see ISO 639 Language codes and Online Browsing
Platform .
The following diagram shows a dataset that has a default language setting of en-US .
The dataset has been extended with metadata translations for three other culture
names: es-ES , fr-FR , and de-DE .
Every metadata translation is associated with a specific culture name. Culture names act
as lookup keys that are used to add and retrieve metadata translations within the
context of a Power BI dataset.
You don't need to supply metadata translations for the dataset's default language. Power BI
can just use the dataset object names directly for that culture name. One way to think
about this is that the dataset object names act as a virtual set of metadata translations
for the default language.
It's possible to explicitly add metadata translation for the default language. Use this
approach sparingly. Power BI Desktop doesn't support loading metadata translations in
its report designer. Instead, Power BI Desktop only loads dataset object names. If you
explicitly add metadata translations for the default language, Power BI reports look
different in Power BI Desktop than they do in the Power BI service.
When the Power BI service loads a report, it reads the culture name passed in the
Accept-Language header and uses it to initialize the language and locale of the report
loading context. On their devices, users can control which culture name is passed in the
Accept-Language header value by configuring regional settings.
When you open a Power BI report in the Power BI service, you can override the Accept-
Language header value by adding the language parameter at the end of the report URL
and setting its value to a valid culture name. For example, you can test loading a report
for a user in Canada who speaks French by setting the language parameter value to fr-
CA .
Note
Adding the language parameter to report URLs provides a convenient way to test
metadata translations in the Power BI service. This technique doesn't require you to
reconfigure any settings on your local machine or in your browser.
Measures currently act differently than tables and columns in Power BI. With measures,
the Power BI service attempts to find the closest match. For the culture name of fr-CA ,
the names of measures would load using the metadata translations for fr-FR .
With tables and columns, the Power BI service requires an exact match between the
culture name in the request and the supported metadata translations. If there isn't an
exact match, the Power BI service falls back to loading dataset object names. The names
of tables and columns in this scenario would load using English dataset object names.
Note
This use of the default language for the names of tables and columns is a known
issue for Power BI.
We recommend that you add metadata translation for any culture name you want to
support. In this example, add three sets of French translations to support the culture
names of fr-FR , fr-BE and fr-CA . The approach handles the scenario where the French
translations for users in France are different from French translations for users in
Canada.
Implement translations using measures and
USERCULTURE
Another feature in Power BI that helps with building multiple-language reports is the
Data Analysis Expressions (DAX) USERCULTURE function. When called inside a measure,
the USERCULTURE function returns the culture name of the current report loading context.
This approach makes it possible to write DAX logic in measures that implement
translations dynamically.
The example report displays the USERCULTURE return value in the upper right corner of
the report banner. You don't typically display a report element like this in a real
application.
The following code is a simple example of a DAX expression for a measure that
implements dynamic translations. A SWITCH statement that calls USERCULTURE forms the
basic pattern for implementing dynamic translations. The measure name and translated
strings shown here are illustrative.
DAX
Sales Report Label = SWITCH( USERCULTURE(),
    "es-ES", "Informe de ventas",
    "fr-FR", "Rapport des ventes",
    "de-DE", "Umsatzbericht",
    "Sales Report"  -- fallback for the dataset's default language
)
In a simple scenario, you build a report for an audience of report consumers that live in
both New York ( en-US ) and in London ( en-GB ). All users speak English ( en ), but some
live in different regions ( US and GB ) where dates and numbers are formatted differently.
For example, a user from New York wants to see dates in a mm/dd/yyyy format while a
user from London wants to see dates in a dd/mm/yyyy format.
Everything works as expected if you configure columns and measures using
format strings that support regional formatting. If you're formatting a date, we
recommend that you use a format string such as Short Date or Long Date because they
support regional formatting.
Here are a few examples of how a date value formatted with Short Date appears when
loaded under different locales.
Locale Format
en-US 12/31/2022
en-GB 31/12/2022
pt-PT 31-12-2022
de-DE 31.12.2022
ja-JP 2022/12/31
Next steps
Create multiple-language reports with Translations Builder
Use best practices to localize Power BI
reports
Article • 07/31/2023
When it comes to localizing software, there are some universal principles to keep in
mind. The first is to plan for localization from the start of any project. It's harder to add
localization support to an existing dataset or report that was initially built without any
regard for internationalization or localization.
This fact is especially true with Power BI reports because there are so many popular
design techniques that don't support localization. Much of the work for adding
localization support to existing Power BI reports involves undoing things that don't
support localization. Only after that work can you move forward with design techniques
that do support localization.
To use the shared dataset approach, create one .pbix project file with a dataset and an
empty report, which remains unused. After this dataset has been deployed to the Power
BI service, report builders can connect to it using Power BI Desktop to create report-only
.pbix files.
This approach makes it possible for the teams building reports to build .pbix project files
with report layouts that can be deployed and updated independently of the underlying
dataset. For more information, see Connect to datasets.
Adding a healthy degree of padding to localized labels is the norm when developing
internationalized software. It's essential that you test your reports with each language
you plan to support. You need to be sure that your report layouts look the way you
expect with any language you choose to support.
Next steps
Create multiple-language reports with Translations Builder
Create multiple-language reports with
Translations Builder
Article • 07/31/2023
Content creators can use Translations Builder to add multiple-language support to .pbix
project files in Power BI Desktop. The following screenshot shows what Translations
Builder looks like when working with a simple .pbix project that supports a few
secondary languages.
Translations Builder is an external tool developed for Power BI Desktop using C#, .NET 6,
and Windows Forms. Translations Builder uses an API called Tabular Object Model (TOM)
to update datasets that are loaded into memory and run in a session of Power BI
Desktop.
Translations Builder does most of its work by adding and updating the metadata
translations associated with dataset objects including tables, columns, and measures.
There are also cases in which Translations Builder creates new tables in a dataset to
implement strategies to handle aspects of building multiple-language reports.
When you open a .pbix project in Power BI Desktop, the dataset defined inside the .pbix
file is loaded into memory in a local session of the Analysis Services engine. Translations
Builder uses TOM to establish a direct connection to the dataset of the current .pbix
project.
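To show what that looks like at the API level, here's a small, hedged TOM sketch that adds one metadata translation to the dataset loaded in Power BI Desktop. It isn't Translations Builder's actual code: the connection string, port, table, column, and translated caption are all placeholders, and it assumes the Microsoft.AnalysisServices.Tabular (AMO/TOM) client library.
C#
using Microsoft.AnalysisServices.Tabular;

// Connect to the dataset that Power BI Desktop has loaded into its local
// Analysis Services instance. The port is assigned by Power BI Desktop and is
// passed to external tools at startup; the value below is a placeholder.
var server = new Server();
server.Connect("DataSource=localhost:12345");
Model model = server.Databases[0].Model;

// Ensure the target culture exists, then translate a column's Caption property.
Culture culture = model.Cultures.Find("es-ES");
if (culture == null)
{
    culture = new Culture { Name = "es-ES" };
    model.Cultures.Add(culture);
}

Column column = model.Tables["Products"].Columns["Product"];
culture.ObjectTranslations.SetTranslation(column, TranslatedProperty.Caption, "Producto");

model.SaveChanges(); // push the change back to the dataset in memory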
On the same computer where you run Power BI Desktop, download and install
Translations Builder by using the Translations Builder Installation Guide .
After you install Translations Builder, you can open it directly from Power BI Desktop in
the External Tools ribbon. The Translations Builder project uses external tools
integration support. For more information, see External tools in Power BI Desktop.
When you launch an external tool like Translations Builder, Power BI Desktop passes
startup parameters to the application, including a connection string. Translations Builder
uses the connection string to establish a connection back to the dataset that's loaded in
Power BI Desktop.
This approach allows Translations Builder to display dataset information and to provide
commands to automate adding metadata translations. For more information, see
Translations Builder Developers Guide .
Translations Builder allows a content creator to view, add, and update metadata
translations using a two-dimensional grid. This translations grid simplifies the user
experience because it abstracts away the low-level details of reading and writing
metadata translations associated with a dataset definition. You work with metadata
translations in the translation grid in much the same way that you work with data in an
Excel spreadsheet.
Next steps
Add a language to a report in Translations Builder
Add a language to a report in
Translations Builder
Article • 07/31/2023
When you open a .pbix project in Translations Builder for the first time, the translation
grid displays a row for each unhidden table, measure, and column in the project's
underlying data model. The translation grid doesn't display rows for dataset objects in
the data model that are hidden from the report view. Hidden objects aren't displayed on
a report and don't require translations.
The following screenshot shows the starting point for a simple data model before it's
been modified to support secondary languages.
Note
If you examine the translation grid for this .pbix project, you can see that the first three
columns are read-only and are used to identify each metadata translation. Each
metadata translation has an Object Type, a Property, and a Name. Translations for the
Caption property are always used. You can add translations for the Description and
DisplayFolder properties if necessary.
The fourth column in the translation grid always displays the translations for the
dataset's default language and locale, which in this case is English [en-US].
Note
Translations Builder makes it possible to update the translations for the default
language. Use this technique sparingly. It can be confusing because translations for
the default language don't load in Power BI Desktop.
Add languages
Translations Builder provides an Add Language option to add secondary languages to
the project's data model.
Translations Builder doesn't add metadata translations for a specific language. Instead, it
adds metadata translations for a culture name that identifies both a language and a
locale. For more information, see Use locale values in multiple-language Power BI
reports.
Translations Builder abstracts away the differences between a language and a culture
name to simplify the user experience. Content creators can think in terms of languages
instead of culture names.
Important
Translations Builder can modify the dataset loaded in memory, but it can't
save the in-memory changes back to the underlying .pbix file. Always return
to Power BI Desktop and select the Save command after you add languages
or create or update translations.
Adding a new language adds a new column of editable cells to the translations grid.
If content creators speak all the languages involved, they can add and update
translations for secondary languages directly in the translation grid with an Excel-like
editing experience.
3. In the Publish to Power BI dialog box, highlight a workspace and then choose
Select.
4. When the publishing finishes, select the link to open the project in the Power BI
service.
After the report loads with its default language, select the browser address bar and add
the following language parameter to the report URL.
HTTP
?language=es-ES
When you add the language parameter to the end of the report URL, assign a value that
is a valid culture name. After you add the language parameter and press Enter, you can
verify that the parameter has been accepted by the browser as it reloads the report.
If you forget to add the question mark (?) or if you don't format the language
parameter correctly, the browser rejects the parameter and removes it from the URL.
After you correctly load a report using a language parameter value of es-ES, you should
see the user experience for the entire Power BI service UI change from English to
Spanish.
The report also displays the Spanish translations for the names of columns and
measures.
As you begin to work with secondary languages and translations to localize a .pbix
project, follow the same set of steps:
JavaScript
let config = {
type: "report",
id: reportId,
embedUrl: embedUrl,
accessToken: embedToken,
tokenType: models.TokenType.Embed,
localeSettings: { language: "de-DE" }
};
Next steps
Add a Localized Labels table to a Power BI report
Add a Localized Labels table to a Power
BI report
Article • 07/31/2023
Report label translations provide localized values for text elements on a report that
aren't directly associated with a dataset object. Examples of report labels are the text
values for report titles, section headings, and button captions. Power BI provides no
built-in features to track or integrate report labels. Translations Builder uses Localized
Labels tables to support this approach.
Note
Any hard-coded text value that you add to the report layout can't be localized. Suppose
you add a column chart to your report. By default, a Cartesian visual such as a column
chart is assigned a dynamic value to its Title property. That value is based on the names
of the columns and measures that have been added into the data roles, such as Axis,
Legend, and Values.
The default Title property for a Cartesian visual is dynamically parsed together in a
fashion that supports localization. As long as you supply metadata translations for the
names of columns and measures in the underlying dataset definition, the Title property
of the visual uses the translations. So, if you translate Sales Revenue, Day, and Year, the
visual creates a localized title.
The following table shows how the default Title property of this visual is updated for
each of these five languages.
You might not like the dynamically generated visual Title, but don't replace it with hard-
coded text. Any hard-coded text for the Title property can't be localized. Either leave the
visual Title property with its default value or use the Localized Labels table strategy to
create report labels that support localization.
After a measure has been created for each report label, Power BI can store and manage
its translations in the same fashion that it does for metadata translations. In fact, the
Localized Labels table strategy uses metadata translations to implement report label
translations.
Translations Builder creates the Localized Labels table and adds a measure each time
you need a report label. The Localized Labels table is created as a hidden table. You can
do all the work to create and manage report labels inside the Translation Builder user
experience. There's no need to inspect or modify the Localized Labels table using the
Power BI Desktop Model or Data views.
Here's an example of the Localized Labels table from the example project. It provides
localized report labels for the report title, visual titles, and captions for navigation
buttons used throughout the report.
1. From the Generate Translated Tables menu, select Create Localized Labels Table.
2. An informational dialog box asks if you want more information about the
Localized Labels table strategy. Select Yes to review documentation or No to
proceed.
After you create the Localized Labels table, there are three sample report labels as
shown in the following screenshot. In most cases, you want to delete these sample
report labels and replace them with the actual report labels required on the current
project.
There's no need to interact with the Localized Labels table in Power BI Desktop. You can
add and manage all the report labels you need in Translations Builder.
1. From the Generate Translated Tables menu, select Add Labels to the Localized
Labels Table. You can also run the command using the shortcut key Ctrl+A.
2. Add report labels one at a time by typing the text for the label. Then select Add
Label.
In the Localized Labels table strategy, you can create, manage, and store report labels in
the same .pbix project file that holds the metadata translations for the names of tables,
columns, and measures. The Localized Labels table strategy can merge metadata
translations and report label translations together in a unified experience in the
translation grid. There's no need to distinguish between metadata translations and
report label translations when it comes to editing translations or when using
Translations Builder features to generate machine translations.
There are other popular localization techniques that track report label translations in a
separate CSV file. While these techniques work, they aren't as streamlined. Report label
translations must be created separately and managed differently from the metadata
translations in a .pbix project. This strategy allows for report label translations and
metadata translations to be stored together and managed in the same way.
The first time you generate the Translated Localized Labels table, Translations Builder
creates the table and populates it with measures. After that, generating the table deletes
all the measures in the Translated Localized Labels table and recreates them. This action
synchronizes all the report label translations between the Localized Labels table and the
Translated Localized Labels table.
Unlike the Localized Labels table, the Translated Localized Labels table isn't hidden in
the Report View. The table provides measures that are intended to be used as report
labels in a report. Here's how the Translated Localized Labels table appears to a report
author in the Data pane when the report is in the Report View in Power BI Desktop.
Every measure in the Translated Localized Labels table has a name that ends with the
word Label. The reason is that two measures in the same dataset can't have the same
name; measure names must be unique on a project-wide basis. It's not
possible to create measures in the Translated Localized Labels table that have the same
name as the measures in the Localized Labels table.
If you examine the machine-generated Data Analysis Expressions (DAX) expressions for
measures in the Translated Localized Labels table, they're based on the same DAX
pattern shown in Implement translations using measures and USERCULTURE. This
pattern uses the DAX function USERCULTURE together with the SWITCH function to return
the best translation for the current user. This DAX pattern defaults to the dataset's
default language if no match is found.
You must run Generate Translated Localized Labels Table anytime you make changes to
the Localized Labels table.
Don't edit the DAX expressions for measures in the Translated Localized Labels table.
Any edits you make are lost because all the measures in this table are deleted and
recreated each time you generate the table.
The example multiple-language project uses a Rectangle shape to display the localized
report label for the report title. The following screenshot shows how to select a
Rectangle shape and navigate to configure its Text property value in the Shape > Text
section in the Format pane.
The Text property of a shape can be configured with a hard-coded string. You must
avoid hard-coding text values into the report layout when you create multiple-language
reports. To use a localized measure, follow these steps.
1. Select the Rectangle shape.
2. Under Format, select Shape > Text. In the Text pane, select the fx button.
Power BI Desktop displays a dialog box that allows you to configure the Text
property of the Rectangle shape.
3. In the Text - Style - Text dialog box, expand the Translated Localized Labels table
and select any measure.
4. Select OK.
You can use the same technique to localize a visual Title using a measure from the
Translated Localized Labels table.
Next steps
Generate machine translations using Azure Translator Service
Generate machine translations using
Azure Translator Service
Article • 07/31/2023
While human translators are typically an essential part of the end-to-end process, it can
take a long time to send out translation files to a translation team and then to wait for
them to come back. With all the recent industry advances in AI, you can also generate
machine translations using a Web API that can be called directly from an external tool
such as Translations Builder. If you generate machine translations, you have something
to work with while waiting for a translation team to return their high-quality human
translations.
While machine translations aren't always guaranteed to be high quality, they do provide
value in the multiple-language report development process.
They can act as translation placeholders so you can begin your testing by loading
reports using secondary languages to see if there are layout issues or unexpected
line breaks.
They can provide human translators with a better starting point because they just
need to review and correct translations instead of creating every translation from
scratch.
They can be used to quickly add support for languages where there are legal
compliance issues and organizations are facing fines or litigation for
noncompliance.
Note
Translations Builder provides a Configuration Options dialog box where you can
configure the key and location to access the Azure Translator Service.
After you configure an Azure Translator Service Key, Translations Builder displays other
command buttons. These buttons generate translations for a single language at a time
or for all languages at once. There are also commands to generate machine translations
only for the translations that are currently empty.
Next steps
Add support for multiple-language page navigation
Add support for multiple-language
page navigation
Article • 07/31/2023
You can't display page tabs to the user in a multiple-language report because page tabs
in a Power BI report don't support localization. For localization, you must provide some
other means for users to navigate from page to page.
You can use a design technique where you add a navigation menu that uses buttons.
When the user selects a button, the button applies a bookmark to navigate to another
page. This section describes the process of building a navigation menu that supports
localization using measures from the Localized Labels table.
This article uses the multiple-language demo project and Power BI Desktop. You don't
need a Power BI license to start developing in Power BI Desktop. If you don't already
have Power BI Desktop installed, see Get Power BI Desktop.
Hide tabs
When you hide all but one of the tabs in your report, none of the tabs appear in the
published report. The report opens to the page of the unhidden tab. Even that tab isn't
displayed.
2. For each tab that you hide, right-click and select Hide Page from the context
menu.
Create bookmarks
Each button uses a bookmark to take the reader to a page. For more information on
bookmarks, see Create page and bookmark navigators.
1. From the View ribbon, select Bookmarks to display the Bookmarks pane.
a. Select a tab, starting with Sales Summary, which serves as the landing page.
b. In the Bookmarks pane, select Add to create a bookmark.
c. Right-click the new bookmark and select Rename. Enter a bookmark name, such
as GoToSalesSummary.
d. Right-click the bookmark name and disable Data and Display. Enable Current
Page behavior.
e. Repeat these steps for each of the hidden tabs. When you finish, the Bookmarks
pane contains one bookmark for each page.
Configure buttons
The multiple-language demo project contains buttons for navigation. To learn more
about adding buttons, see Create buttons in Power BI reports.
1. Select a button at the top of the report, starting with Sales Summary.
2. In the Format pane, turn on Action.
3. Under Action, set Type to Bookmark and Bookmark to the relevant bookmark,
starting with GoToSalesSummary.
4. In the same way, configure each button in the navigation menu to apply a
bookmark to navigate to a specific page.
5. For each button, select Button > Style > Text and then select the fx button.
6. In the Text - State dialog box, from the Translated Localized Labels table, select
the entry that corresponds to that button. For instance, Time Slices Label for Time
Slices.
The report now has no visible tabs when you publish it to the Power BI service. The
report opens to the Sales Summary page. Readers can move from page to page by
using the buttons, which are localized by using the Translated Localized Labels table.
Next steps
Guidance for Power BI
Enable workflows for human translation
in Power BI reports
Article • 08/04/2023
When you create multiple-language reports for Power BI, you can work quickly and
efficiently by using Translations Builder and by generating machine translations.
However, machine generated translations alone are inadequate for many production
needs. You need to integrate other people acting as translators into a human workflow
process.
Translations Builder uses a translation sheet, which is a .csv file that you export to send
to a translator. The human acting as a translator updates the translation sheet and then
returns it to you. You then import the revised sheet to integrate the changes into the
current dataset.
Translations Builder generates a file for a selected language using a special naming
format, for example, PbixProjectName-Translations-Spanish.csv. The name includes the
dataset name and the language for translation. Translations Builder saves the generated
translation sheet to a folder known as the Outbox.
Human translators can make edits to a translation sheet using Microsoft Excel. When
you receive an updated translation sheet from a translator, copy it to the Inbox folder.
From there, you can import it to integrate those updated translations back into the
dataset for the current project.
1. From the Dataset Connection menu, select Configure Settings to display the
Configuration Options dialog box.
2. Select the set buttons to update the settings for Translations Outbox Folder Path
and Translations Inbox Folder Path.
Next steps
Export translation sheets in Translations Builder
Export translation sheets in Translations
Builder
Article • 08/04/2023
When you use Translations Builder with external translators, you need to export a
translation sheet that contains the default language and empty cells or machine
generated translations. Translators update the .csv file and return it to you.
1. In Translations Builder, select the language for which you want to generate a
translation sheet.
2. Select Export Translations Sheet to generate a translation sheet for that language.
You can select Open Export in Excel to view the exported file immediately.
The result of the export operation is a .csv file in the Outbox directory. If you selected
Open Export in Excel, you also see the result in Excel.
Tip
If you export translation sheets for every language at once, don't select Open Export
in Excel. That option opens all of the generated files in Excel.
Translations Builder generates a .csv file for the full translation sheet named
PbixProjectName-Translations-Master.csv. When you open the translations sheet in Excel,
you can see all secondary language columns and all translations. You can think of this
translation sheet as a backup of all translations on a project-wide basis.
Next steps
Import translation sheets in Translations Builder
Import translation sheets in Translations
Builder
Article • 08/04/2023
When you use Translations Builder with external translators, you export a translation
sheet that contains the default language and empty cells or machine generated
translations. Translators update the .csv file and return it to you. You then import the
updated sheet to integrate their changes into the project.
Suppose you generate a translation sheet to send to a translator. When opened in Excel,
this translation sheet looks like the following screenshot.
The job of the translator is to review all translations in the relevant column and to make
updates where appropriate. From the perspective of the translator, the top row with
column headers and the first four columns should be treated as read-only values.
Here's a simple example. After you generate the current master translation sheet for a
project, imagine you delete French as a language from the project by right-clicking the
French [fr-FR] column header and selecting Delete This Language From Data Model.
When you attempt to delete the column for a language, Translations Builder prompts
you to verify.
You can select OK to continue. After you confirm the delete operation, the column for
French has been removed from the translations grid. Behind the scenes, Translations
Builder has also deleted all the French translations from the project.
Suppose that you deleted this language by mistake. You can use a previously generated
master translation sheet to restore the deleted language. If you import the translation
sheet, the French [fr-FR] column reappears as the last column.
Next steps
Manage dataset translations at the enterprise level
Manage dataset translations at the
enterprise level
Article • 08/04/2023
You can use a master translations sheet as a project backup. Translations Builder adds a
secondary language along with its translations to a .pbix project if it's found in the
translation sheet but not in the target project. For more information, see Import
translation sheets in Translations Builder.
You can also create an enterprise-level master translation sheet to import when you
create new .pbix projects.
Import translations
Imagine you have two .pbix projects that have a similar data model in terms of the
tables, columns, and measures. In the first project, you have already added metadata
translations for all the unhidden dataset objects. In the second project, you haven't yet
started to add secondary languages or translations. You can export the master
translation sheet from the first project and import it into the second project.
The Import Translations command starts by determining whether there are any
secondary languages in the translation sheet that aren't in the target .pbix project. It
adds any secondary languages not already present in the target project. After that, the
command moves down the translation sheet row by row.
For each row, Translations Builder determines whether a dataset object in the .csv file
matches a dataset object of the same name in the .pbix project. When it finds a match,
the command copies all the translations for that dataset object into the .pbix project. If it
finds no match, the command ignores that row and continues to the next row.
The Import Translations command provides special treatment for report labels that
have been added to the Localized Labels table. If you import a translation sheet with
one or more localized report labels into a new .pbix project, the command creates the
Localized Labels table.
Because the Import Translations command creates the Localized Labels table and
copies report labels into a target .pbix project, it can be the foundation for maintaining
an enterprise-level master translation sheet. Use a set of localized report labels across
multiple .pbix projects. Each time you create a new .pbix project, you can import the
enterprise-level translation sheet to add the generalized set of localized report labels.
Next steps
Guidance for Power BI
Implement a data translation strategy
Article • 08/09/2023
All multiple-language reports require metadata translation and report label translation,
but not necessarily data translation. To determine whether your project requires data
translation, think through the use cases that you plan to support. Using data translation
involves planning and effort. You might decide not to support data translation unless it's
a hard requirement for your project.
For example, consider a scenario in which each customer deployment uses a single
language for its database and all its users. Both
metadata translations and report label translations must be implemented in this use
case. You deploy a single version of the .pbix file across all customer deployments.
However, there's no need to implement data translations when no database instance
ever needs to be viewed in multiple languages.
A different use case introduces the requirement of data translations. The example .pbix
project file uses a single database instance that contains sales performance data across
several European countries/regions. This solution must display its reports in different
languages with data from a single database instance.
If you have people that use different languages and locales to interact with the same
database instance, you still need to address other considerations.
Examine the text-based columns that are candidates for translation. Determine
how hard translating those text values is. Columns with short text values, like
product names and product categories, are good candidates for data translations.
Columns that hold longer text values, such as product descriptions, require more
effort to generate high quality translations.
Consider the number of distinct values that require translation. You can easily
translate product names in a database that holds 100 products, and you can
probably still manage when the number reaches 1,000. What happens if the
number of translated values reaches 10,000 or 100,000? If you can't rely on
machine-generated translations, your translation team might have trouble scaling
up to handle that volume of human translations.
Consider whether there's on-going maintenance. Every time someone adds a new
record to the underlying database, there's the potential to introduce new text
values that require translation. This consideration doesn't apply to metadata
translation or report label translation. In those situations, you create a finite
number of translations and then your work is done. Metadata translation and
report label translation don't require on-going maintenance as long as the
underlying dataset schema and the report layout remain the same.
There are several factors that go into deciding whether to use data translation. You must
decide whether it's worth the time and effort required to implement data translation
properly. You might decide that implementing metadata translations and report label
translations goes far enough. If your primary goal is to make your reporting solution
compliant with laws or regulations, you might also find that implementing data
translations isn't a requirement.
Next steps
Extend the data source schema to support data translations
Extend the data source schema to
support data translations
Article • 08/09/2023
There are multiple ways to implement data translations in Power BI. Some data
translation strategies are better than others. Whatever approach you choose, make sure
that it scales in terms of performance. You should also ensure your strategy scales in
terms of the overhead required to add support for new secondary languages as part of
the on-going maintenance.
The current series of articles describes a strategy for implementing data translations
made possible by the Power BI Desktop feature called field parameters.
The design approach shown here uses a three-part naming convention for table column
names used to hold data translations. A name consists of the base column name (such
as Product), the word Translation, and the target language (such as Spanish).
For example, the column that contains product names translated into Spanish is
ProductTranslationSpanish. Using this three-part naming convention isn't required for
implementing data translation, but Translations Builder gives these columns special
treatment.
Understand field parameters
A field parameter is a table in which each row represents a field, and each of these
fields must be defined as either a column or a measure. In one sense, a field parameter
is just a predefined set of fields. Because rows in the table represent these fields, the set
of fields of a field parameter supports filtering. You can think of a field parameter as a
filterable set of fields.
When you create a field parameter, you can populate the fields collection using either
measures or columns.
When you use field parameters to implement data translations, use columns instead of
measures. The primary role that field parameters play in implementing data translations
is to provide a single, unified field for report authoring that can be dynamically
switched between source columns.
Next steps
Implement data translation using field parameters
Implement data translation using field
parameters
Article • 08/09/2023
This article shows you how to implement data translation by using a field parameter.
The process has the following steps:
1. In Power BI Desktop, on the Modeling ribbon, select New parameter > Fields.
2. In the Parameters dialog box, enter the name Translated Product Names.
3. Populate the fields collection of this field parameter with the columns from the
Products table that hold the translated product names.
4. Be sure that Add slicer to this page is enabled.
5. Select Create.
After you create a field parameter, it appears in the Fields list on the right as a new
table. Under Data, select Translated Product Names to see the Data Analysis
Expressions (DAX) code that defines the field parameter, as shown in the following
screenshot.
1. Create a Table visual and add the Translated Product Names field to it. You can see
Table as the visual type under Visualizations and Translated Product Names as
the Columns value. Position both the slicer and the data table anywhere on the
canvas.
2. Select one item in the slicer, such as ProductTranslationSpanish. The table now
shows a single corresponding column.
If you examine the DAX expression, the hard-coded column names from the underlying
data source appear, such as ProductTranslationEnglish and ProductTranslationSpanish.
DAX
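// Illustrative sketch only: the exact definition that Power BI Desktop generates
// depends on the columns you selected. With the three-part naming convention, it
// looks similar to the following, with the raw column names used as display values.
Translated Product Names = {
    ("ProductTranslationEnglish", NAMEOF('Products'[ProductTranslationEnglish]), 0),
    ("ProductTranslationSpanish", NAMEOF('Products'[ProductTranslationSpanish]), 1),
    ("ProductTranslationFrench", NAMEOF('Products'[ProductTranslationFrench]), 2),
    ("ProductTranslationGerman", NAMEOF('Products'[ProductTranslationGerman]), 3)
}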
Update the DAX expression to replace the column names with localized translations for
the word Product as shown in the following code.
DAX
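// Sketch of the updated definition, assuming English, Spanish, French, and German.
// Only the display strings change, so the slicer shows a localized word for Product.
Translated Product Names = {
    ("Product", NAMEOF('Products'[ProductTranslationEnglish]), 0),
    ("Producto", NAMEOF('Products'[ProductTranslationSpanish]), 1),
    ("Produit", NAMEOF('Products'[ProductTranslationFrench]), 2),
    ("Produkt", NAMEOF('Products'[ProductTranslationGerman]), 3)
}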
When you make this change, the column header is translated along with product names.
The names of the columns in a field parameter are generated based on the name you
give to the top-level field parameter. You should rename the columns to simplify the
data model and to improve readability.
2. Rename the two hidden fields with shorter names, such as Fields and Sort Order.
Add a language ID column
The field parameter is a table with three columns named Product, Fields, and Sort
Order. The next step is to add a fourth column with a language identifier to enable
filtering by language. You can add the column by modifying the DAX expression for the
field parameter.
1. Add a fourth string value to the row for each language, using the lowercase
two-character language identifier.
DAX
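// Sketch with a fourth string value per row that holds the lowercase language identifier.
Translated Product Names = {
    ("Product", NAMEOF('Products'[ProductTranslationEnglish]), 0, "en"),
    ("Producto", NAMEOF('Products'[ProductTranslationSpanish]), 1, "es"),
    ("Produit", NAMEOF('Products'[ProductTranslationFrench]), 2, "fr"),
    ("Produkt", NAMEOF('Products'[ProductTranslationGerman]), 3, "de")
}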
After you update the DAX expression with a language identifier for each language,
a new column named Value4 appears in the Data view of the Translated Product
Names table.
2. Double-click the Value4 column and rename it to LanguageId.
3. Select LanguageId to highlight it. From the control ribbon, select Sort by column
> Sort Order.
You don't need to configure the sort column for the two pre-existing fields. Power
BI Desktop configured them when you set up the field parameter.
4. Open the Model view and, next to LanguageId select More options (three dots).
Select Hide in report view.
Report authors never need to see this column because it's used to select a language by
filtering behind the scenes.
In this article, you created a field parameter named Translated Product Names and
extended it with a column named LanguageId. The LanguageId column is used to filter
which source column is used. That action determines which language is displayed to
report consumers.
Next steps
Add the languages table to filter field parameters
Add the languages table to filter field
parameters
Article • 08/09/2023
As a content creator working with Power BI Desktop, there are many different ways to
add a new table to a data model. In this article, you use Power Query to create a table
named Languages.
1. In Power BI Desktop, select Transform data to open the Power Query Editor.
2. Under Queries, right-click and select New Query > Blank Query from the context
menu.
3. Select the new query. Under Query Settings > Properties > Name, enter
Languages as the name of the query.
4. From the Home ribbon, select Advanced Editor.
5. Copy the following M code into the editor, then select Done.
Power Query M
let
LanguagesTable = #table(type table [
Language = text,
LanguageId = text,
DefaultCulture = text,
SortOrder = number
], {
{"English", "en", "en-US", 1 },
{"Spanish", "es", "es-ES", 2 },
{"French", "fr", "fr-FR", 3 },
{"German", "de", "de-DE", 4 }
}),
SortedRows = Table.Sort(LanguagesTable,{{"SortOrder",
Order.Ascending}}),
QueryOutput = Table.TransformColumnTypes(SortedRows,{{"SortOrder",
Int64.Type}})
in
QueryOutput
When this query runs, it generates the Languages table with a row for each of the
four supported languages.
Create a relationship
Next, create a relationship between the Languages table and the Translated Product
Names table created in Implement data translation using field parameters.
1. In Power BI Desktop, open the Model view.
2. Find the Languages table and the Translated Product Names table.
3. Drag the LanguageId column from one table to the LanguageId entry in the other
table.
After you establish the relationship between Languages and Translated Product Names,
it serves as the foundation for filtering the field parameter on a report-wide basis. For
example, you can open the Filter pane and add the Language column from the
Languages table to the Filters on all pages section. If you configure this filter with the
Require single selection option, you can switch between languages using the Filter
pane.
Next steps
Synchronize multiple field parameters
Synchronize multiple field parameters
Article • 08/09/2023
1. On the Modeling ribbon, select New parameter > Fields.
2. In the Parameters dialog box, enter the name Translated Category Names.
3. Populate the fields with the columns from the Products table for the desired
languages.
4. Select Create.
5. Open the Data view. Select the table to view the Data Analysis Expressions (DAX)
code. Update the code to match the following code.
DAX
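// Sketch only: column names such as CategoryTranslationEnglish are assumptions that
// follow the three-part naming convention described earlier in this series.
Translated Category Names = {
    ("Category", NAMEOF('Products'[CategoryTranslationEnglish]), 0, "en"),
    ("Categoría", NAMEOF('Products'[CategoryTranslationSpanish]), 1, "es"),
    ("Catégorie", NAMEOF('Products'[CategoryTranslationFrench]), 2, "fr"),
    ("Kategorie", NAMEOF('Products'[CategoryTranslationGerman]), 3, "de")
}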
After you make your changes, the Category value is localized and there's a new
column.
6. Double-click Value4 and change the name to LanguageId.
1. Open the Model view.
2. Locate the Translated Category Names table and the Languages table. Create a
relationship between them by using the LanguageId columns, as you did for
Translated Product Names.
You have now learned how to synchronize the selection of language across multiple
field parameters. This example involves two field parameters. If your project involves a
greater number of columns requiring data translations such as 10, 20 or even 50, you
can repeat this approach and scale up as much as you need.
7 Note
Next steps
Implement data translations for a calendar table
Implement data translations for a
calendar table
Article • 08/09/2023
If you're implementing data translations, you can add translation support for text-based
columns in calendar tables. These tables include translations for the names of months or
the days of the week. This approach allows you to create visuals that mention days or
months.
Translated versions make the visual easy to read in your supported languages.
The strategy in this article for calendar table column translations uses Power Query and
the M query language. Power Query provides built-in functions, such as Date.MonthName ,
which accept a Date parameter and return a text-based calendar name. If your .pbix
project has en-US as its default language and locale, the following Power Query function
call evaluates to a text-based value of January.
Power Query M
Date.MonthName( #date(2023, 1, 1) )
If you want to translate the month name into French, you can pass a text value of fr-FR.
Power Query M
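// Passing the fr-FR culture returns the French month name, janvier.
Date.MonthName( #date(2023, 1, 1), "fr-FR" )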
Power Query is built on a functional query language named M. With that language, you
can iterate through the rows of the Languages table to discover what languages and
what default cultures the project supports. You can write a query that uses the
Languages table as its source to generate a calendar translation table with the names of
months or weekdays.
Here's the M code that generates the Translated Month Names Table.
Power Query M
let
Source = #table( type table [ MonthNumber = Int64.Type ],
List.Split({1..12},1)),
Translations = Table.AddColumn( Source, "Translations",
each
[ MonthDate = #date( 2022, [ MonthNumber ], 1 ),
Translations = List.Transform(Languages[DefaultCulture], each
Date.MonthName( MonthDate, _ ) ),
TranslationTable = Table.FromList( Translations, null ),
TranslationsTranspose = Table.Transpose(TranslationTable),
TranslationsColumns = Table.RenameColumns(
TranslationsTranspose,
List.Zip({ Table.ColumnNames( TranslationsTranspose ),
List.Transform(Languages[Language],
each "MonthNameTranslations" & _ ) })
)
]
),
ExpandedTranslations = Table.ExpandRecordColumn(Translations,
"Translations",
{ "TranslationsColumns" },
{ "TranslationsColumns"
}),
ColumnsCollection = List.Transform(Languages[Language], each
"MonthNameTranslations" & _ ),
ExpandedTranslationsColumns =
Table.ExpandTableColumn(ExpandedTranslations,
"TranslationsColumns",
ColumnsCollection,
ColumnsCollection ),
TypedColumnsCollection = List.Transform(ColumnsCollection, each {_, type
text}),
QueryOutput = Table.TransformColumnTypes(ExpandedTranslationsColumns,
TypedColumnsCollection)
in
QueryOutput
Tip
You can simply copy and paste the M code from the
ProductSalesMultiLanguage.pbix sample whenever you need to add calendar
translation tables to your project.
If the Languages table contains four rows for English, Spanish, French, and German, the
Translated Month Names Table query generates a table with four translation columns
as shown in the following screenshot.
Likewise, the query named Translated Day Names Table generates a table with weekday
name translations.
The two queries named Translated Month Names Table and Translated Day Names
Table have been written to be generic. They don't contain any hard-coded column
names. These queries don't require any modifications in the future when you add or
remove languages from the project. All you need to do is update the data rows in the
Languages query.
After you load these tables, complete the following configuration:
Configure the translation columns in Translated Month Names Table to use the
sort column MonthNumber.
Configure the translation columns in Translated Day Names Table to use the sort
column DayNumber.
Create a relationship between the Calendar table and the fact tables, such as Sales,
using the Date column to create a one-to-many relationship. The relationships created
between the Calendar table and the two translations tables are based on the
MonthNumber column and the DayNumber column.
After you create the required relationships with the Calendar table, create a new field
parameter for each of the two calendar translations tables. Creating a field parameter
for a calendar translation table is just like creating the field parameters for product
names and category names shown earlier.
Add a relationship between these new field parameter tables and the Languages table
to ensure the language filtering strategy works as expected.
After you create the field parameters for Translated Month Names and Translated Day
Names, you can begin to surface them in a report using cartesian visuals, tables, and
matrices.
After you set up everything, you can test your work using a report-level filter on the
Languages table to switch between languages and to verify translations for names of
months and days of the week work as expected.
Next steps
Load multiple-language reports
Load multiple-language reports
Article • 08/09/2023
To load multiple-language reports in the user's language, you can use bookmarks or
embed reports.
1. Create a separate bookmark for each language that supports data translations.
2. Disable Display and Current Page and only enable Data behavior.
3. To apply the bookmark for a specific language, supply a second parameter in the
report URL.
HTTP
?language=es&bookmarkGuid=Bookmark856920573e02a8ab1c2a
The second report URL parameter is named bookmarkGuid. The bookmark's filter on the
Languages table is applied before any data is displayed to the user. If you embed the
report in an application instead, you can set the report's language and locale in the
embed configuration, as shown in the following JavaScript code.
JavaScript
let config = {
type: "report",
id: reportId,
embedUrl: embedUrl,
accessToken: embedToken,
tokenType: models.TokenType.Embed,
localeSettings: { language: "de-DE" }
};
It's possible to apply a bookmark to an embedded report. However, you can instead
apply a filter directly on the Languages table as the report loads by using the Power BI
JavaScript API. There's no need to add bookmarks for filtering the Languages table if
you only intend to access the report through Power BI embedding.
To apply a filtering during the loading process of an embedded report, register an event
handler for the loaded event. When you register an event handler for an embedded
report's loaded event, you can provide a JavaScript event handler that runs before the
rendering process begins. This approach makes the loaded event the ideal place to
register an event handler whose purpose is to apply the correct filtering on the
Languages table.
Here's an example of JavaScript code that registers an event handler for the loaded
event to apply a filter to the Languages table for Spanish.
JavaScript
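// Illustrative sketch: assumes "report" is the embedded report object returned by
// powerbi.embed and "models" comes from the powerbi-client library.
report.on("loaded", async () => {
  const languageFilter = {
    $schema: "http://powerbi.com/product/schema#basic",
    target: { table: "Languages", column: "Language" },
    operator: "In",
    values: ["Spanish"],
    filterType: models.FilterType.Basic
  };
  // Apply the filter before rendering begins so data translations load in Spanish.
  await report.updateFilters(models.FiltersOperation.Add, [languageFilter]);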
});
Tip
When you set filters with the Power BI JavaScript API, you should prefer the
updateFilters method over the setFilters method. The updateFilters method lets you
add, replace, or remove specific filters without resetting all of the filters that are
already applied to the report.
Next steps
Guidance for Power BI
On-premises data gateway sizing
Article • 02/27/2023
This article targets Power BI administrators who need to install and manage the on-
premises data gateway.
The gateway is required whenever Power BI must access data that isn't accessible
directly over the Internet. You can install the gateway on an on-premises server or on a
virtual machine hosted as Infrastructure-as-a-Service (IaaS).
Gateway workloads
The on-premises data gateway supports two workloads. It's important you first
understand these workloads before we discuss gateway sizing and recommendations.
For more information about Live Connection, see Datasets in the Power BI service
(Externally-hosted models).
For more information about DirectQuery, see Dataset modes in the Power BI
service (DirectQuery mode).
This workload requires CPU resources for routing queries and query results. Usually
there's much less demand for CPU than is required by the Cache data workload—
especially when it's required to transform data for caching.
Reliable, fast, and consistent connectivity is important to ensure report users have
responsive experiences.
Sizing considerations
Determining the correct sizing for your gateway machine can depend on the following
variables:
Generally, Live Connection and DirectQuery workloads require sufficient CPU, while
Cache data workloads require more CPU and memory. Both workloads depend on good
connectivity with the Power BI service, and the data sources.
7 Note
Power BI capacities impose limits on model refresh parallelism, and Live Connection
and DirectQuery throughput. There's no point sizing your gateways to deliver more
than what the Power BI service supports. Limits differ by Premium SKU (and
equivalently sized A SKU). For more information, see What is Power BI Premium?
(Capacity nodes).
Recommendations
Gateway sizing recommendations depend on many variables. In this section, we provide
you with general recommendations that you can take into consideration.
Initial sizing
It can be difficult to accurately estimate the right size. We recommend that you start
with a machine with at least 8 CPU cores, 8 GB of RAM, and multiple Gigabit network
adapters. You can then measure a typical gateway workload by logging CPU and
memory system counters. For more information, see Monitor and optimize on-premises
data gateway performance.
Connectivity
Plan for the best possible connectivity between the Power BI service and your gateway,
and your gateway and the data sources.
For more information, see Manage on-premises data gateway high-availability clusters
and load balancing.
Optimize data sources, model, and report designs—for more information, see
DirectQuery model guidance in Power BI Desktop
Create aggregations to cache higher-level results to reduce the number of
DirectQuery requests
Restrict Automatic page refresh intervals, in report designs and capacity settings
Especially when dynamic RLS is enforced, restrict dashboard cache update
frequency
Especially for smaller data volumes or for non-volatile data, convert the design to
an Import or Composite model
Next steps
For more information related to this article, check out the following resources:
7 Note
The Performance Analyzer cannot be used to monitor Premium Per User (PPU)
activities or capacity.
7 Note
You can use SQL Server Profiler when your data source is one of the following:
SQL Server
SQL Server Analysis Services
Azure Analysis Services
U Caution
1. Open your Power BI Desktop report. To make it easy to locate the port in the next
step, close any other open reports.
2. To determine the port being used by Power BI Desktop, in PowerShell (with
administrator privileges), or at the Command Prompt, enter the following
command:
PowerShell
netstat -b -n
The output will be a list of applications and their open ports. Look for the port
used by msmdsrv.exe, and record it for later use. It's your instance of Power BI
Desktop.
3. To connect SQL Server Profiler to your Power BI Desktop report:
a. Open SQL Server Profiler.
b. In SQL Server Profiler, on the File menu, select New Trace.
c. For Server Type, select Analysis Services.
d. For Server Name, enter localhost:[port recorded earlier].
e. Click Run—now the SQL Server Profiler trace is live, and is actively profiling
Power BI Desktop queries.
4. As Power BI Desktop queries are executed, you'll see their respective durations and
CPU times. Depending on the data source type, you may see other events
indicating how the query was executed. Using this information, you can determine
which queries are the bottlenecks.
A benefit of using SQL Server Profiler is that it's possible to save a SQL Server (relational)
database trace. The trace can become an input to the Database Engine Tuning Advisor.
This way, you can receive recommendations on how to tune your data source.
Next steps
For more information about this article, check out the following resources:
Query Diagnostics
Performance Analyzer
Troubleshoot report performance in Power BI
Power BI Premium Metrics app
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Troubleshoot report performance in
Power BI
Article • 02/27/2023
Slow reports can be identified by report users who experience reports that are slow to
load, or slow to update when interacting with slicers or other features. When reports are
hosted on a Premium capacity, slow reports can also be identified by monitoring the
Power BI Premium Metrics app. This app helps you to monitor the health and capacity of
your Power BI Premium subscription.
Terminator Action(s)
Manage capacity
Scale capacity
Architecture change
Consider Azure Analysis Services
Check on-premises gateway
Premium capacity
When the report is hosted on a Premium capacity, use the Power BI Premium Metrics
app to determine if the report-hosting capacity frequently exceeds capacity resources.
When there's pressure on resources, it may be time to manage or scale the capacity
(flowchart terminator 1). When there are adequate resources, investigate capacity
activity during typical report usage (flowchart terminator 2).
Shared capacity
When the report is hosted on shared capacity, it's not possible to monitor capacity
health. You'll need to take a different investigative approach.
First, determine if slow performance occurs at specific times of the day or month. If it
does—and many users are opening the report at these times—consider two options:
If you determine there's no time pattern, next consider if slow performance is isolated to
a specific geography or region. If it is, it's likely that the data source is remote and
there's slow network communication. In this case, consider:
Finally, if you determine there's no time pattern and slow performance occurs in all
regions, investigate whether slow performance occurs on specific devices, clients, or web
browsers. If it doesn't, use Power BI Desktop Performance Analyzer, as described earlier,
to optimize the report or model (flowchart terminator 5).
When you determine specific devices, clients, or web browsers contribute to slow
performance, we recommend creating a support ticket through the Power BI support
page (flowchart terminator 6).
Next steps
For more information about this article, check out the following resources:
Power BI guidance
Monitoring report performance
Performance Analyzer
Power BI adoption roadmap
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Lifecycle management best practices
Article • 08/10/2023
This article provides guidance for data & analytics creators who are managing their
content throughout its lifecycle in Microsoft Fabric. The article focuses on the use of Git
integration for source control and deployment pipelines as a release tool. For general
guidance on enterprise content publishing, see Enterprise content publishing.
) Important
Development - Learn about the best ways of creating content in the deployment
pipelines development stage.
Test - Understand how to use a deployment pipelines test stage to test your
environment.
Content preparation
To best prepare your content for on-going management throughout its lifecycle, review
the information in this section before you start to develop and deploy content.
To implement a secure and easy workflow, plan who gets access to each part of the
environments being used, both the Git repository and the dev/test/prod stages in a
pipeline. Some of the considerations to take into account are:
Who should have access to the source code in the Git repository?
Which operations should users with pipeline access be able to perform in each
stage?
Which branch are you connecting the workspace to? What’s the policy defined for
that branch?
Is the workspace shared by multiple team members? Should they make changes
directly in the workspace, or only through Pull requests?
Development
This section provides guidance for working with deployment pipelines and Git
integration in your development stage.
Make sure you have an isolated environment to work in, so others don’t override
your work before it gets committed. This means working in a Desktop tool (such as
VSCode , Power BI Desktop or others), or in a separate workspace that other
users can’t access.
Commit to a branch that you created and no other developer is using. If you’re
using a workspace as an authoring environment, read about working with
branches.
Commit together changes that must be deployed together. This advice applies for
a single item, or multiple items that are related to the same change. Committing all
related changes together can help you later when deploying to other stages,
creating pull requests, or reverting changes back.
Big commits might hit a max commit size limit. Be mindful of the number of items
you commit together, or the general size of an item. For example, reports can grow
large when adding large images. It’s bad practice to store large-size items in
source control systems, even if it works. Consider ways to reduce the size of your
items if they have lots of static resources, like images.
Undo button: The Undo operation is an easy and fast way to revert the immediate
changes you made, as long as they are not committed yet. You can also undo each
item separately. Read more about the undo operation.
As data isn't stored in Git, consider that reverting a data item to an older version might
break the existing data and could possibly require you to drop the data, or the operation
might fail. Check this in advance before reverting changes.
Setting up the workspace: Before you start, make sure you can create a new
workspace (if you don’t already have one), that you can assign it to a Fabric
capacity, and that you have access to data to work in your workspace.
Creating a new branch: Create a new branch from the main branch, so you’ll have
the most up-to-date version of your content. Also make sure you connect to the
correct folder in the branch, so you can pull the right content into the workspace.
Small, frequent changes: It's a Git best practice to make small incremental
changes that are easy to merge and less likely to get into conflicts. If that’s not
possible, make sure to update your branch from main so you can resolve conflicts
on your own first.
If a developer sets up a private workspace with all required configurations, they can
continue to use that workspace for any future branch they create. When a sprint is
over, your changes are merged, and you're starting a fresh task, just switch the
connection to a new branch on the same workspace. You can also do this if you
suddenly need to fix a bug in the middle of a sprint. Think of it as a working
directory on the web.
Developers using a client tool (such as VSCode, Power BI Desktop or others), don’t
necessarily need a workspace. They can create branches and commit changes to
that branch locally, push those to the remote repo and create a pull request to the
main branch, all without a workspace. A workspace is needed only as a testing
environment to check that everything works in a real-life scenario. It's up to you to
decide when that should happen.
Test
This section provides guidance for working with a deployment pipelines test stage.
Make sure that these three factors are addressed in your test environment:
Data volume
Usage volume
Capacity that's similar to the production capacity
When testing, you can use the same capacity as the production stage. However, using
the same capacity can make production unstable during load testing. To avoid unstable
production, test using a different capacity similar in resources to the production
capacity. To avoid extra costs, use a capacity where you can pay only for the testing
time.
You can easily find the related items by using impact analysis.
As different items have different capabilities when it comes to retaining data when
changes to the definition are applied, be mindful when applying the changes. Some
practices that can help you apply the changes in the safest way:
Know in advance what the changes are and what their impact might be on the
existing data. Use commit messages to describe the changes made.
Upload the changes first to a dev or test environment, to see how that item
handles the change with test data.
Consider the best timing when updating the Prod environment to minimize the
damage that any errors might cause to your business users who consume the data.
) Important
The deployment process doesn't include updating the app content or settings. To
apply changes to content or settings, manually update the app in the required
pipeline stage.
Production
This section provides guidance to the deployment pipelines production stage.
In addition, limit access to the repo or pipeline by only enabling permissions to users
that are part of the content creation process.
Make sure that you set production deployment rules for data sources and parameters
defined in the dataset.
If your build or release pipeline requires you to change the source code, or run
scripts in a build environment before deployment to the workspace, then
connecting the workspace to Git won't help you.
After deploying to each stage, make sure to change all the configuration specific
to that stage.
If you are using deployment from Git, we recommend following the practices described
in Adopt a Git branching strategy.
Next steps
End to end lifecycle management tutorial
Get started with Git integration
Get started with deployment pipelines
This article targets Power BI administrators who need to access and analyze data
sourced from the Power BI activity log. It focuses on the programmatic retrieval of
Power BI activities by using the Get-PowerBIActivityEvent cmdlet from the Power BI
Management module. Up to 30 days of history is available. This cmdlet uses the Get
Activity Events Power BI REST API operation, which is an admin API. PowerShell cmdlets
add a layer of abstraction on top of the underlying APIs. Therefore, the PowerShell
cmdlet simplifies access to the Power BI activity log.
There are other manual and programmatic ways to retrieve Power BI activities. For more
information, see Access user activity data.
Analyzing the Power BI activity log is crucial for governance, compliance, and to track
adoption efforts. For more information about the Power BI activity log, see Track user
activities in Power BI.
Tip
We recommend that you fully review the Tenant-level auditing article. This article
covers planning, key decisions, prerequisites, and key solution development
activities to consider when building an end-to-end auditing solution.
Examples available
The goal of this article is to provide you with examples to help get you started. The
examples include scripts that retrieve data from the activity log by using the Power BI
Management PowerShell module.
2 Warning
The scripts aren't production-ready because they're intended only for educational
purposes. You can, however, adapt the scripts for production purposes by adding
logic for logging, error handling, alerting, and refactoring for code reuse and
modularization.
Because they're intended for learning, the examples are simplistic, yet they're real-world.
We recommend that you review all examples to understand how they apply slightly
different techniques. Once you identify the type of activity data that you need, you can
mix and match the techniques to produce a script that best suits your requirements.
One of the examples, View three activities for N days, retrieves the create app, update
app, and install app activities.
For simplicity, most of the examples output their result to the screen. For instance, in
Visual Studio Code, the data is output to the terminal panel, which holds a buffered set
of data in memory.
Most of the examples retrieve raw JSON data. Working with the raw JSON data has
many advantages.
All of the information that's available for each activity event is returned, which helps
you learn what data is available. Keep in mind that the contents of an API response
differ depending on the actual activity event. For example, the data available for a
CreateApp event differs from the data available for a ViewReport event.
Because data that's available in the activity log changes as Power BI evolves over
time, you can expect the API responses to change too. That way, new data that's
introduced won't be missed. Your process is also more resilient to change and less
likely to fail.
The details of an API response can differ for the Power BI commercial cloud and
the national/regional clouds.
If you have different team members (such as data engineers) who get involved
with this process, simplifying the initial process to extract the data makes it easier
for multiple teams to work together.
Tip
We recommend that you keep your scripts that extract data as simple as possible.
Therefore, avoid parsing, filtering, or formatting the activity log data as it's
extracted. This approach uses an ELT methodology, which has separate steps to
Extract, Load, and Transform data. This article only focuses on the first step, which is
concerned with extracting the data.
Requirements
To use the example scripts, you must meet the following requirements.
PowerShell client tool: Use your preferred tool for running PowerShell commands.
All examples were tested by using the PowerShell extension for Visual Studio
Code with PowerShell 7. For information about client tools and PowerShell
versions, see Tenant-level auditing.
Power BI Management module: Install all Power BI PowerShell modules. If you
previously installed them, we recommend that you update the modules to ensure
that you're using the latest published version. A sketch of the install and update
commands follows this list.
Power BI administrator role: The example scripts are designed to use an
interactive authentication flow. Therefore, the user running the PowerShell example
scripts must sign in to use the Power BI REST APIs. To retrieve activity log data, the
authenticating user must belong to the Power BI administrator role (because
retrieving activity events is done with an admin API). Service principal
authentication is out of scope for these learning examples.
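Here's a minimal sketch of how you might install or update the Power BI Management
module from the PowerShell Gallery. Adjust the scope to match your organization's
standards.
PowerShell
# Install the Power BI Management module for the current user (first-time setup).
Install-Module -Name MicrosoftPowerBIMgmt -Scope CurrentUser
# Or update a previously installed module to the latest published version.
Update-Module -Name MicrosoftPowerBIMgmt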
The remainder of this article includes sample scripts that show you different ways to
retrieve activity log data.
Sample request 1
The first script redirects you to a browser to complete the sign in process. User accounts
that have multi-factor authentication (MFA) enabled are able to use this interactive
authentication flow to sign in.
PowerShell
Connect-PowerBIServiceAccount
) Important
Users without Power BI administrator privileges can't run any of the sample scripts
that follow in this article. Power BI administrators have permission to manage the
Power BI service and to retrieve tenant-wide metadata (such as activity log data).
Although using service principal authentication is out of scope for these examples,
we strongly recommend that you set up a service principal for production-ready,
unattended scripts that will run on a schedule.
Tip
When extracting data from the activity log by using the PowerShell cmdlet, each
request can extract data for one day (a maximum of 24 hours). Therefore, the goal
of this example is to start simply by checking one user for one day. There are other
examples later in this article that show you how to use a loop to export data for
multiple days.
Sample request 2
This script declares two PowerShell variables to make it easier to reuse the script:
$UserEmailAddr : The email address for the user you're interested in.
$ActivityDate : The date you're interested in. The format is YYYY-MM-DD (ISO
8601 format). You can't request a date earlier than 30 days before the current date.
PowerShell
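# Illustrative sketch: the values below are hypothetical. Substitute a real user email
# address and a date within the last 30 days.
$UserEmailAddr = 'user01@contoso.com'
$ActivityDate = '2023-03-15'

Get-PowerBIActivityEvent `
    -StartDateTime ($ActivityDate + 'T00:00:00.000') `
    -EndDateTime ($ActivityDate + 'T23:59:59.999') `
    -User $UserEmailAddr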
7 Note
You might notice a backtick (`) character at the end of some of the lines in the
PowerShell scripts. In PowerShell, one way you can use the backtick character is as a
line continuation character. We've used it to improve the readability of the scripts in
this article.
Tip
Sample response 2
Here's a sample JSON response. It includes two activities that the user performed:
JSON
[
{
"Id": "10af656b-b5a2-444c-bf67-509699896daf",
"RecordType": 20,
"CreationTime": "2023-03-15T15:18:30Z",
"Operation": "ViewReport",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100FFF92C7717B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"Activity": "ViewReport",
"ItemName": "Gross Margin Analysis",
"WorkSpaceName": "Sales Analytics",
"DatasetName": "Sales Data",
"ReportName": "Gross Margin Analysis",
"WorkspaceId": "e380d1d0-1fa6-460b-9a90-1a5c6b02414c",
"ObjectId": "Gross Margin Analysis",
"DatasetId": "cfafbeb1-8037-4d0c-896e-a46fb27ff229",
"ReportId": "94e57e92-Cee2-486d-8cc8-218c97200579",
"ArtifactId": "94e57e92-Cee2-486d-8cc8-218c97200579",
"ArtifactName": "Gross Margin Analysis",
"IsSuccess": true,
"ReportType": "PowerBIReport",
"RequestId": "53451b83-932b-f0b0-5328-197133f46fa4",
"ActivityId": "beb41a5d-45d4-99ee-0e1c-b99c451e9953",
"DistributionMethod": "Workspace",
"ConsumptionMethod": "Power BI Web",
"SensitivityLabelId": "e3dd4e72-5a5d-4a95-b8b0-a0b52b827793",
"ArtifactKind": "Report"
},
{
"Id": "5c913f29-502b-4a1a-a089-232edaf176f7",
"RecordType": 20,
"CreationTime": "2023-03-15T17:22:00Z",
"Operation": "ExportActivityEvents",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 2,
"UserKey": "100FFF92C7717B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "MicrosoftPowerBIMgmt/1.2.1111.0",
"Activity": "ExportActivityEvents",
"IsSuccess": true,
"RequestId": "2af6a22d-6f24-4dc4-a26a-5c234ab3afad",
"ActivityId": "00000000-0000-0000-0000-000000000000",
"ExportEventStartDateTimeParameter": "2023-03-17T00:00:00Z",
"ExportEventEndDateTimeParameter": "2023-03-17T23:59:59.999Z"
}
]
Tip
Extracting the Power BI activity log data is also a logged operation, as shown in the
previous response. When you analyze user activities, you might want to omit
administrator activities—or analyze them separately.
Example 3: View an activity for N days
Sometimes you might want to investigate one specific type of activity for a series of
days. This example shows how to retrieve per-item report sharing activities. It uses a
loop to retrieve activities from the previous seven days.
Sample request 3
The script declares two variables:
$ActivityType : The operation name for the activity that you're investigating.
$NbrOfDaysToCheck : How many days you're interested in checking. It performs a
loop working backward from the current day. The maximum value allowed is 30
days (because the earliest date that you can retrieve is 30 days before the current
day).
PowerShell
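# Illustrative sketch: checks one operation (ShareReport) for each of the previous
# seven days, one request per day because a request covers a maximum of 24 hours.
$ActivityType = 'ShareReport'
$NbrOfDaysToCheck = 7

For ($DayOffset = 1; $DayOffset -le $NbrOfDaysToCheck; $DayOffset++) {
    $ActivityDate = (Get-Date).AddDays(-$DayOffset).ToString('yyyy-MM-dd')
    Write-Output "Checking $ActivityDate for $ActivityType events."

    Get-PowerBIActivityEvent `
        -StartDateTime ($ActivityDate + 'T00:00:00.000') `
        -EndDateTime ($ActivityDate + 'T23:59:59.999') `
        -ActivityType $ActivityType
}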
Tip
You can use this looping technique to check any of the operations recorded in the
activity log.
Sample response 3
Here's a sample JSON response. It includes two activities that the user performed:
The first activity shows that a sharing link for a user was created. Note that the
SharingAction value differs depending on whether the user created a link, edited a
link, or deleted a link. For brevity, only one type of sharing link activity is shown in
the response.
The second activity shows that direct access sharing for a group was created. Note
that the SharingInformation value differs depending on the action taken. For
brevity, only one type of direct access sharing activity is shown in the response.
JSON
[
{
"Id": "be7506e1-2bde-4a4a-a210-bc9b156142c0",
"RecordType": 20,
"CreationTime": "2023-03-15T19:52:42Z",
"Operation": "ShareReport",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "900GGG12D2242A",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/110.0",
"Activity": "ShareReport",
"ItemName": "Call Center Stats",
"WorkSpaceName": "Sales Analytics",
"SharingInformation": [
{
"RecipientEmail": "[email protected]",
"RecipientName": "Turner",
"ObjectId": "fc9bbc6c-e39b-44cb-9c8a-d37d5665ec57",
"ResharePermission": "ReadReshare",
"UserPrincipalName": "[email protected]"
}
],
"WorkspaceId": "e380d1d0-1fa6-460b-9a90-1a5c6b02414c",
"ObjectId": "Call Center Stats",
"Datasets": [
{
"DatasetId": "fgagrwa3-9044-3e1e-228f-k24bf72gg995",
"DatasetName": "Call Center Data"
}
],
"ArtifactId": "81g22w11-vyy3-281h-1mn3-822a99921541",
"ArtifactName": "Call Center Stats",
"IsSuccess": true,
"RequestId": "7d55cdd3-ca3d-a911-5e2e-465ac84f7aa7",
"ActivityId": "4b8b53f1-b1f1-4e08-acdf-65f7d3c1f240",
"SharingAction": "CreateShareLink",
"ShareLinkId": "J_5UZg-36m",
"ArtifactKind": "Report",
"SharingScope": "Specific People"
},
{
"Id": "b4d567ac-7ec7-40e4-a048-25c98d9bc304",
"RecordType": 20,
"CreationTime": "2023-03-15T11:57:26Z",
"Operation": "ShareReport",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "900GGG12D2242A",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "69.132.26.0",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "ShareReport",
"ItemName": "Gross Margin Analysis",
"WorkSpaceName": "Sales Analytics",
"SharingInformation": [
{
"RecipientName": "SalesAndMarketingGroup-NorthAmerica",
"ObjectId": "ba21f28b-6226-4296-d341-f059257a06a7",
"ResharePermission": "Read"
}
],
"CapacityId": "1DB44EEW-6505-4A45-B215-101HBDAE6A3F",
"CapacityName": "Shared On Premium - Reserved",
"WorkspaceId": "e380d1d0-1fa6-460b-9a90-1a5c6b02414c",
"ObjectId": "Gross Margin Analysis",
"Datasets": [
{
"DatasetId": "cfafbeb1-8037-4d0c-896e-a46fb27ff229",
"DatasetName": "Sales Data"
}
],
"ArtifactId": "94e57e92-Cee2-486d-8cc8-218c97200579",
"ArtifactName": "Gross Margin Analysis",
"IsSuccess": true,
"RequestId": "82219e60-6af0-0fa9-8599-c77ed44fff9c",
"ActivityId": "1d21535a-257e-47b2-b9b2-4f875b19855e",
"SensitivityLabelId": "16c065f5-ba91-425e-8693-261e40ccdbef",
"SharingAction": "Direct",
"ArtifactKind": "Report",
"SharingScope": "Specific People"
}
]
7 Note
This JSON response shows that the data structure is different based on the type of
event. Even the same type of event can have different characteristics that produce a
slightly different output. As recommended earlier in this article, you should get
accustomed to retrieving the raw data.
Sample request 4
The script declares the following variables:
$ActivityDate : The date you're interested in. The format is YYYY-MM-DD.
$Activity1 , $Activity2 , and $Activity3 : The operation names of the three activities
you're investigating, such as CreateApp, UpdateApp, and InstallApp.
You can only retrieve activity events for one activity at a time. So, the script searches for
each operation separately. It combines the search results into a variable named
$FullResults , which it then outputs to the screen.
U Caution
Running many loops many times greatly increases the likelihood of API throttling.
Throttling can happen when you exceed the number of requests you're allowed to
make in a given time period. The Get Activity Events operation is limited to 200
requests per hour. When you design your scripts, take care not to retrieve the
original data more times than you need. Generally, it's a better practice to extract
all of the raw data once per day and then query, transform, filter, or format that
data separately.
The script shows results for the current day.
7 Note
To retrieve results for the previous day only, avoiding partial day results, see the
Export all activities for previous N days example.
PowerShell
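# Illustrative sketch of the declarations assumed by the rest of this script. The
# activity names mirror the sample response (create app, update app, install app).
$Activity1 = 'CreateApp'
$Activity2 = 'UpdateApp'
$Activity3 = 'InstallApp'
$ActivityDate = Get-Date -Format 'yyyy-MM-dd'   # Current day
$FullResults = @()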
#Get activity 1 and append its results into the full resultset:
$Activity1Results = @()
$Activity1Results += Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate+'T00:00:00.000') `
-EndDateTime ($ActivityDate+'T23:59:59.999') `
-ActivityType $Activity1 | ConvertFrom-Json
If ($null -ne $Activity1Results) {$FullResults += $Activity1Results}
#Get activity 2 and append its results into the full resultset:
$Activity2Results = @()
$Activity2Results += Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate+'T00:00:00.000') `
-EndDateTime ($ActivityDate+'T23:59:59.999') `
-ActivityType $Activity2 |
ConvertFrom-Json
If ($null -ne $Activity2Results) {$FullResults += $Activity2Results}
#Get activity 3 and append its results into the full resultset:
$Activity3Results = @()
$Activity3Results += Get-PowerBIActivityEvent `
-StartDateTime ($ActivityDate+'T00:00:00.000') `
-EndDateTime ($ActivityDate+'T23:59:59.999') `
-ActivityType $Activity3 |
ConvertFrom-Json
If ($null -ne $Activity3Results) {$FullResults += $Activity3Results}
#Convert all of the results back to a well-formed JSON object:
$FullResults = $FullResults | ConvertTo-Json
Sample response 4
Here's a sample JSON response. It includes three activities that the user performed:
2 Warning
The response only includes the user permissions that were modified. For example,
it's possible that three audiences could've been created in a CreateApp event. In the
UpdateApp event, if only one audience changed, then only one audience would
appear in the OrgAppPermission data. For that reason, relying on the UpdateApp
event for tracking all app permissions is incomplete because the activity log only
shows what's changed.
For a snapshot of all Power BI app permissions, use the Get App Users as Admin
API operation instead.
JSON
[
{
"Id": "65a26480-981a-4905-b3aa-cbb3df11c7c2",
"RecordType": 20,
"CreationTime": "2023-03-15T18:42:13Z",
"Operation": "CreateApp",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100FFF92C7717B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "CreateApp",
"ItemName": "Sales Reconciliations App",
"WorkSpaceName": "Sales Reconciliations",
"OrgAppPermission": {
"recipients": "Sales Reconciliations App(Entire Organization)",
"permissions": "Sales Reconciliations App(Read,CopyOnWrite)"
},
"WorkspaceId": "9325a31d-067e-4748-a592-626d832c8001",
"ObjectId": "Sales Reconciliations App",
"IsSuccess": true,
"RequestId": "ab97a4f1-9f5e-4a6f-5d50-92c837635814",
"ActivityId": "9bb54a9d-b688-4028-958e-4d7d21ca903a",
"AppId": "42d60f97-0f69-470c-815f-60198956a7e2"
},
{
"Id": "a1dc6d26-b006-4727-bac6-69c765b7978f",
"RecordType": 20,
"CreationTime": "2023-03-16T18:39:58Z",
"Operation": "UpdateApp",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100GGG12F9921B",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "UpdateApp",
"ItemName": "Sales Analytics",
"WorkSpaceName": "Sales Analytics",
"OrgAppPermission": {
"recipients": "Sales Reps Audience(SalesAndMarketingGroup-
NorthAmerica,SalesAndMarketingGroup-Europe)",
"permissions": "Sales Reps Audience(Read,CopyOnWrite)"
},
"WorkspaceId": "c7bffcd8-8156-466a-a88f-0785de2c8b13",
"ObjectId": "Sales Analytics",
"IsSuccess": true,
"RequestId": "e886d122-2c09-4189-e12a-ef998268b864",
"ActivityId": "9bb54a9d-b688-4028-958e-4d7d21ca903a",
"AppId": "c03530c0-db34-4b66-97c7-34dd2bd590af"
},
{
"Id": "aa002302-313d-4786-900e-e68a6064df1a",
"RecordType": 20,
"CreationTime": "2023-03-17T18:35:22Z",
"Operation": "InstallApp",
"OrganizationId": "927c6607-8060-4f4a-a5f8-34964ac78d70",
"UserType": 0,
"UserKey": "100HHH12F4412A",
"Workload": "PowerBI",
"UserId": "[email protected]",
"ClientIP": "192.168.1.1",
"UserAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0)
Gecko/20100101 Firefox/111.0",
"Activity": "InstallApp",
"ItemName": "Sales Reconciliations App",
"ObjectId": "Sales Reconciliations App",
"IsSuccess": true,
"RequestId": "7b3cc968-883f-7e13-081d-88b13f6cfbd8",
"ActivityId": "9bb54a9d-b688-4028-958e-4d7d21ca903a"
}
]
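Related to the warning above, here's a minimal sketch of taking a snapshot of an app's permissions by calling the Get App Users as Admin operation through the Invoke-PowerBIRestMethod cmdlet. The app ID value is taken from the CreateApp event in the sample response, and the relative URL follows the documented admin operation; treat the listing as an illustration rather than a production-ready script.
PowerShell
#Requires an administrator account; sign in first with Connect-PowerBIServiceAccount.
$AppId = '42d60f97-0f69-470c-815f-60198956a7e2'   #AppId taken from the CreateApp event above

#Return the current users and their permissions for the app as raw JSON:
Invoke-PowerBIRestMethod -Url "admin/apps/$AppId/users" -Method Get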
Sample request 5
The script declares two variables:
$ActivityDate : The date you're interested in. The format is YYYY-MM-DD. You
can't request a date earlier than 30 days before the current date.
$WorkspaceName : The name of the workspace you're interested in.
The script stores the results in the $Results variable. It converts the JSON data to objects so the results can be parsed, and then filters the results to retrieve five specific columns. The CreationTime data is renamed as ActivityDateTime. Finally, the results are filtered by the workspace name and output to the screen.
There isn't a parameter for the Get-PowerBIActivityEvent cmdlet that allows you to
specify a workspace when checking the activity log (earlier examples in this article used
PowerShell parameters to set a specific user, date, or activity name). In this example, the
script retrieves all of the data and then parses the JSON response to filter the results for
a specific workspace.
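A minimal sketch of that approach follows. It assumes the Get-PowerBIActivityEvent cmdlet used in the earlier examples, an illustrative date, and a hypothetical workspace name. Of the five selected properties, ActivityDateTime (renamed from CreationTime) matches the description above; the other four are taken from the sample responses in this article and are illustrative.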
PowerShell
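#Input values (illustrative):
$ActivityDate = '2023-03-24'        #Date format must be YYYY-MM-DD, within the last 30 days
$WorkspaceName = 'Sales Analytics'  #Hypothetical workspace name

#Retrieve one full day of activity events as raw JSON:
$Results = Get-PowerBIActivityEvent `
    -StartDateTime ($ActivityDate+'T00:00:00.000') `
    -EndDateTime ($ActivityDate+'T23:59:59.999')

#Convert the JSON data to objects so the results can be parsed:
$ResultsObject = $Results | ConvertFrom-Json

#Keep five properties, rename CreationTime to ActivityDateTime, filter by workspace, and output:
$ResultsObject |
    Select-Object @{Name='ActivityDateTime'; Expression={$_.CreationTime}},
        Activity, UserId, ItemName, WorkSpaceName |
    Where-Object {$_.WorkSpaceName -eq $WorkspaceName}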
Sample response 5
Here are the filtered results, which include a small subset of properties. The format is
easier to read for occasional analysis. However, we recommend that you convert it back
to JSON format if you plan to store the results.
7 Note
After converting the JSON results to a PowerShell object, time values are converted
to local time. The original audit data is always recorded in Coordinated Universal
Time (UTC) time, so we recommend that you get accustomed to using only UTC
time.
Output
Tip
You can use this technique to filter results by any property in the results. For
example, you can use a specific event RequestId to analyze just one specific event.
) Important
Activity log data is available for a maximum of 30 days. It's important that you
export and retain the data so you can do historical analysis. If you don't currently
export and store the activity log data, we strongly recommend that you prioritize
doing so.
Sample request 6
The script retrieves all activities for a series of days. It declares three variables:
must exist before running the script. Don't include a backslash (\) character at the
end of the folder path (because it's automatically added at runtime). We
recommend that you use a separate folder to store raw data files.
$ExportFileName : The prefix for each file name. Because one file per day is saved,
the script adds a suffix to indicate the data contained in the file, and the date and
time the data was retrieved. For example, if you ran a script at 9am (UTC) on April
25, 2023 to extract activity data for April 23, 2023, the file name would be:
PBIActivityEvents-20230423-202304250900. Although the folder structure where
it's stored is helpful, each file name should be fully self-describing.
We recommend that you extract data that's at least one day before the current day. That
way, you avoid retrieving partial day events, and you can be confident that each export
file contains the full 24 hours of data.
The script gathers up to 30 days of data, through to the previous day. Timestamps for
audited events are always in UTC. We recommend that you build all of your auditing
processes based on UTC time rather than your local time.
The script produces one JSON file per day. The suffix of the file name includes the
timestamp (in UTC format) of the extracted data. If you extract the same day of data
more than once, the suffix in the file name helps you identify the newer file.
PowerShell
[string]$DateToExtractLabel=$DateToExtractUTC.ToString("yyyy-MM-dd")
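The line above converts the UTC extraction date to a yyyy-MM-dd label used in the cmdlet call. As a fuller sketch of the daily export loop described above, assuming hypothetical $ExportFolder and $NbrOfDaysToExtract variable names and the Get-PowerBIActivityEvent cmdlet shown in the earlier examples, the logic might look like the following.
PowerShell
#Hypothetical input values (adjust to your environment):
$ExportFolder = 'C:\Power-BI-Raw-Data\Activity-Log'   #Folder must already exist; no trailing backslash
$ExportFileName = 'PBIActivityEvents'
$NbrOfDaysToExtract = 30                              #Activity data is retained for a maximum of 30 days

#Timestamp (UTC) recording when this extract ran; used as the file name suffix:
$RetrievalLabel = (Get-Date).ToUniversalTime().ToString("yyyyMMddHHmm")

#Loop backward from yesterday so that only full days of data are extracted:
For ($DayOffset = 1; $DayOffset -le $NbrOfDaysToExtract; $DayOffset++)
{
    [datetime]$DateToExtractUTC = (Get-Date).ToUniversalTime().Date.AddDays(-$DayOffset)
    [string]$DateToExtractLabel = $DateToExtractUTC.ToString("yyyy-MM-dd")

    #Retrieve one full day of raw activity data (JSON):
    $Events = Get-PowerBIActivityEvent `
        -StartDateTime ($DateToExtractLabel+'T00:00:00.000') `
        -EndDateTime ($DateToExtractLabel+'T23:59:59.999')

    #Write one file per day; the name records the day of data and the retrieval time:
    $FileName = "$ExportFileName-$($DateToExtractUTC.ToString('yyyyMMdd'))-$RetrievalLabel.json"
    $Events | Out-File -FilePath (Join-Path $ExportFolder $FileName)
}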
The cmdlet allows you to request one full day of activity per call. When you communicate with the API directly, you can only request one hour per API request.
The cmdlet handles continuation tokens for you. If you use the API directly, you need to check the continuation token to determine whether there are any more results to come. Some APIs use pagination and continuation tokens for performance reasons when they return a large amount of data: they return the first set of records, and a continuation token lets you make a subsequent API call to retrieve the next set. You continue calling the API until a continuation token isn't returned. Using the continuation token lets you combine multiple API requests into one logical set of results (a sketch of this pattern follows these points). For an example of using a continuation token, see Activity Events REST API.
The cmdlet handles Azure Active Directory (Azure AD) access token expirations for you. After you've authenticated, your access token expires after one hour (by default). In this case, the cmdlet automatically refreshes the access token for you. If you communicate with the API directly, you need to handle token expiration and request a new access token yourself.
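To illustrate the pagination pattern, here's a minimal sketch that calls the Activity Events REST API directly through Invoke-PowerBIRestMethod. The date values are illustrative, and the sketch assumes the cmdlet accepts the absolute continuationUri returned by the API and that you've already signed in with Connect-PowerBIServiceAccount.
PowerShell
#Request one hour of activity events directly from the REST API, following continuation URIs:
$Url = "admin/activityevents?startDateTime='2023-03-24T09:00:00.000Z'&endDateTime='2023-03-24T09:59:59.999Z'"
$AllEvents = @()
Do {
    #Invoke-PowerBIRestMethod returns raw JSON; convert it so the properties can be read:
    $Response = Invoke-PowerBIRestMethod -Url $Url -Method Get | ConvertFrom-Json
    $AllEvents += $Response.activityEventEntities

    #A continuationUri is returned while more results remain; it's null on the last page:
    $Url = $Response.continuationUri
} While ($null -ne $Url)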
7 Note
A sample response is omitted because its output is similar to the responses shown in the previous examples.
Next steps
For more information related to this article, check out the following resources:
7 Note
We also recommend that you thoroughly read the Power BI adoption roadmap
and Power BI implementation planning articles.
This series makes two assumptions: your organization has a legacy BI platform currently in place, and the decision has been made to formally migrate content and users to Power BI.
Migrating to the Power BI service is the primary focus of this series. Additional
considerations may apply for national/regional cloud customers beyond what is
discussed in this series of articles.
The following diagram shows four high-level phases for deploying Power BI in your
organization.
Phase Description
Set up and evaluate Power BI. The first phase involves establishing the initial Power BI
architecture. Preliminary deployment and governance planning are handled at this point,
as well as Power BI evaluations including return on investment and/or cost benefit
analysis.
Create new solutions quickly in Power BI. In the second phase, self-service BI authors
can begin using and evaluating Power BI for their needs, and value can be obtained from
Power BI quickly. Activities in Phase 2 place importance on agility and rapid business
value, which is critical to gaining acceptance for the selection of a new BI tool such as
Power BI. For this reason, the diagram depicts activities in Phase 2 happening side by
side with the migration activities in Phase 3.
Migrate BI assets from legacy platform to Power BI. The third phase addresses the
migration to Power BI. It's the focus of this series of Power BI migration articles. Five
specific migration stages are discussed in the next section.
Adopt, govern, and monitor Power BI. The final phase comprises ongoing activities such as nurturing a data culture, communication, and training. These activities have a significant impact on the effectiveness of a Power BI implementation. It's important to have governance and security policies and processes that are appropriate for your organization, as well as auditing and monitoring that allow you to scale, grow, and continually improve.
) Important
Pre-migration steps
Stage 1: Gather requirements and prioritize
Stage 2: Plan for deployment
Stage 3: Conduct proof of concept
Stage 4: Create and validate content
Stage 5: Deploy, support, and monitor
Pre-migration steps
The pre-migration steps include actions you might consider prior to beginning a project to migrate content from a legacy BI platform to Power BI. They typically include the initial tenant-level deployment planning. For more information about these activities, see Prepare to migrate to Power BI.
Acknowledgments
This series of articles was written by Melissa Coates, Data Platform MVP and owner of
Coates Data Strategies . Contributors and reviewers include Marc Reguera, Venkatesh
Titte, Patrick Baumgartner, Tamer Farag, Richard Tkachuk, Matthew Roche, Adam Saxton,
Chris Webb, Mark Vaillancourt, Daniel Rubiolo, David Iseminger, and Peter Myers.
Next steps
In the next article in this Power BI migration series, learn about the pre-migration steps
when migrating to Power BI.
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Prepare to migrate to Power BI
Article • 02/27/2023
This article describes actions you can consider prior to migrating to Power BI.
7 Note
The output from the pre-migration steps includes an initial governance model, initial
high-level deployment planning, in addition to an inventory of the reports and data to
be migrated. Additional information from activities in Stages 1, 2, and 3 will be
necessary to fully estimate the level of effort for migrating individual solutions.
Tip
Most of the topics discussed in this article also apply to a standard Power BI
implementation project.
Tip
If you fear you're starting to overcommunicate, then it's probably just about right.
Specific goals for Power BI adoption and where Power BI fits into the overall BI
strategy for the organization.
How the Power BI administrator role will be handled, particularly in decentralized
organizations.
Policies related to achieving trusted data: use of authoritative data sources,
addressing data quality issues, and use of consistent terminology and common
definitions.
Security and data privacy strategy for data sources, data models, reports, and
content delivery to internal and external users.
How internal and external compliance, regulatory, and audit requirements will be
met.
) Important
The most effective governance model strives to balance user empowerment with the necessary level of control. For more information, read about discipline at the core and flexibility at the edge.
Note that Stage 2 references solution-level deployment planning. The Stage 2 activities
should respect the organizational-level decisions whenever possible.
) Important
What are the specific motivations and objectives for this migration? For more
information, see Power BI migration overview (Consider migration reasons). This
article describes the most common reasons for migrating to Power BI. Certainly,
your objectives should be specified at the organizational level. Beyond that,
migrating one legacy BI solution may benefit significantly from cost savings,
whereas migrating a different legacy BI solution may focus on gaining workflow
optimization benefits.
What's the expected cost/benefit or ROI for this migration? Having a clear
understanding of expectations related to cost, increased capabilities, decreased
complexity, or increased agility, is helpful in measuring success. It can provide
guiding principles to help with decision-making during the migration process.
What key performance indicators (KPIs) will be used to measure success? The
following list presents some example KPIs:
Number of reports rendered from legacy BI platform, decreasing month over
month.
Number of reports rendered from Power BI, increasing month over month.
Number of Power BI report consumers, increasing quarter over quarter.
Percentage of reports migrated to production by target date.
Reduction in licensing costs year over year.
Tip
The Power BI activity log can be used as a source for measuring KPI progress.
1. Inventory of reports: Compile a list of reports and dashboards that are migration
candidates.
2. Inventory of data sources: Compile a list of all data sources accessed by existing
reports. It should include both enterprise data sources as well as departmental and
personal data sources. This process may unearth data sources not previously
known to the IT department, often referred to as shadow IT.
3. Audit log: Obtain data from the legacy BI platform audit log to understand usage
patterns and assist with prioritization. Important information to obtain from the
audit log includes:
7 Note
In many cases, the content isn't migrated to Power BI exactly as is. The migration
represents an opportunity to redesign the data architecture and/or improve report
delivery. Compiling an inventory of reports is crucial to understanding what
currently exists so you can begin to assess what refactoring needs to occur. The
remaining articles in this series describe possible improvements in more detail.
Compiling the existing inventory of data and reports is a good candidate for automation when you have a tool that can do it for you. The extent to which automation can help with portions of the migration process, such as compiling the inventory, depends on the tools you have available.
Next steps
In the next article in this Power BI migration series, learn about Stage 1, which is
concerned with gathering and prioritizing requirements when migrating to Power BI.
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Gather requirements to migrate to
Power BI
Article • 02/27/2023
This article describes Stage 1, which is concerned with gathering and prioritizing
requirements when migrating to Power BI.
7 Note
The output from Stage 1 includes detailed requirements that have been prioritized.
However, additional activities in Stages 2 and 3 must be completed to fully estimate the
level of effort.
) Important
Stages 1-5 represent activities related to one specific solution. There are decisions
and activities at the organizational/tenant level which impact the process at the
solution level. Some of those higher-level planning activities are discussed in the
Power BI migration overview article. When appropriate, defer to the
organizational-level decisions for efficiency and consistency.
The Power BI adoption roadmap describes these types of strategic and tactical
considerations. It has an emphasis on organizational adoption.
Tip
Most of the topics discussed in this article also apply to a standard Power BI
implementation project.
Compile requirements
The inventory of existing BI items, compiled in the pre-migration steps, becomes the input for the requirements of the new solution to be created in Power BI. Collecting requirements is about understanding the current state, as well as what items users would like changed or refactored when reports are redesigned in Power BI. Detailed requirements will be useful for solution deployment planning in Stage 2, during creation of a proof of concept in Stage 3, and when creating the production-ready solution in Stage 4.
Purpose, audience, and expected action: Identify the purpose and business
process applicable to each report, as well as the audience, analytical workflow, and
expected action to be taken by report consumers.
How consumers use the report: Consider sitting with report consumers of the
existing report to understand exactly what they do with it. You may learn that
certain elements of the report can be eliminated or improved in the new Power BI
version. This process involves additional time investment but it's valuable for
critical reports or reports that are used often.
Owner and subject matter expert: Identify the report owner and any subject
matter expert(s) associated with the report or data domain. They may become the
owners of the new Power BI report going forward. Include any specific change
management requirements (which typically differ between IT-managed and
business-managed solutions) as well as approvals and sign-offs, which will be
required when changes are made in the future. For more information, see this
article.
Content delivery method: Clarify report consumer expectations for content
delivery. It may be on-demand, interactive execution, embedded within a custom
application, or delivery on a schedule using an e-mail subscription. There may also
be requirements to trigger alert notifications.
Interactivity needs: Determine must-have and nice-to-have interactivity
requirements, such as filters, drill-down actions, or drillthrough actions.
Data sources: Ensure all data sources required by the report are discovered, and
data latency needs (data freshness) are understood. Identify historical data,
trending, and data snapshot requirements for each report so they can be aligned
with the data requirements. Data source documentation can also be useful later on
when performing data validation of a new report with its source data.
Security requirements: Clarify security requirements (such as allowed viewers,
allowed editors, and any row-level security needs), including any exceptions to
normal organizational security. Document any data sensitivity level, data privacy, or
regulatory/compliance needs.
Calculations, KPIs, and business rules: Identify and document all calculations, KPIs,
and business rules that are currently defined within the existing report so they can
be aligned with the data requirements.
Usability, layout, and cosmetic requirements: Identify specific usability, layout,
and cosmetic needs related to data visualizations, grouping and sorting
requirements, and conditional visibility. Include any specific considerations related
to mobile device delivery.
Printing and exporting needs: Determine whether there are any requirements
specific to printing, exporting, or pixel-perfect layout. These needs will influence
which type of report will be most suitable (such as a Power BI, Excel, or paginated
report). Be aware that report consumers tend to place a lot of importance on how
they've always done things, so don't be afraid to challenge their way of thinking.
Be sure to talk in terms of enhancements rather than change.
Risks or concerns: Determine whether there are other technical or functional
requirements for reports, as well as any risks or concerns regarding the information
being presented in them.
Open issues and backlog items: Identify any future maintenance, known issues, or
deferred requests to add to the backlog at this time.
Tip
Existing queries: Identify whether there are existing report queries or stored
procedures that can be used by a DirectQuery model or a Composite model, or
can be converted to an Import model.
Types of data sources: Compile the types of data sources that are necessary,
including centralized data sources (such as an enterprise data warehouse) as well
as non-standard data sources (such as flat files or Excel files that augment
enterprise data sources for reporting purposes). Finding where data sources are
located, for purposes of data gateway connectivity, is important too.
Data structure and cleansing needs: Determine the data structure for each
requisite data source, and to what extent data cleansing activities are necessary.
Data integration: Assess how data integration will be handled when there are
multiple data sources, and how relationships can be defined between each model
table. Identify specific data elements needed to simplify the model and reduce its
size.
Acceptable data latency: Determine the data latency needs for each data source. It
will influence decisions about which data storage mode to use. Data refresh
frequency for Import model tables is important to know too.
Data volume and scalability: Evaluate data volume expectations, which will factor
into decisions about large model support and designing DirectQuery or Composite
models. Considerations related to historical data needs are essential to know too.
For larger datasets, determining incremental data refresh will also be necessary.
Measures, KPIs, and business rules: Assess needs for measures, KPIs, and business
rules. They will impact decisions regarding where to apply the logic: in the dataset
or the data integration process.
Master data and data catalog: Consider whether there are master data issues
requiring attention. Determine if integration with an enterprise data catalog is
appropriate for enhancing discoverability, accessing definitions, or producing
consistent terminology accepted by the organization.
Security and data privacy: Determine whether there are any specific security or
data privacy considerations for datasets, including row-level security requirements.
Open issues and backlog items: Add any known issues, known data quality
defects, future maintenance, or deferred requests to the backlog at this time.
) Important
Data reusability can be achieved with shared datasets, which can optionally be
certified to indicate trustworthiness and improve discoverability. Data preparation
reusability can be achieved with dataflows to reduce repetitive logic in multiple
datasets. Dataflows can also significantly reduce the load on source systems
because the data is retrieved less often—multiple datasets can then import data
from the dataflow.
7 Note
For more information about centralization of data models, read about discipline at
the core and flexibility at the edge.
Next steps
In the next article in this Power BI migration series, learn about Stage 2, which is
concerned with planning the migration for a single Power BI solution.
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Plan deployment to migrate to Power BI
Article • 03/14/2023
This article describes Stage 2, which is concerned with planning the migration for a
single Power BI solution.
7 Note
The focus of Stage 2 is on defining how the requirements that were defined in Stage 1
are used to migrate a solution to Power BI.
The output from Stage 2 includes as many specific decisions as possible to guide the
deployment process.
Decision-making of this nature is an iterative and non-linear process. Some planning will
have already occurred in the pre-migration steps. Learnings from a proof of concept
(described in Stage 3) may occur in parallel with deployment planning. Even while
creating the solution (described in Stage 4), additional information may arise that
influences deployment decisions.
) Important
Stages 1-5 represent activities related to one specific solution. There are decisions
and activities at the organizational/tenant level which impact the process at the
solution level. Some of those higher-level planning activities are discussed in the
Power BI migration overview article. When appropriate, defer to the
organizational-level decisions for efficiency and consistency.
Tip
For more information about architectural considerations, see Section 3 of the Planning a
Power BI enterprise deployment whitepaper .
U Caution
If you're tempted to rely on using Power BI Desktop files stored in a file system, be
aware that it's not an optimal approach. Using the Power BI service (or Power BI
Report Server) has significant advantages for security, content distribution, and
collaboration. The ability to audit and monitor activities is also enabled by the
Power BI service.
Tip
Consider creating a workspace for a specific business activity or project. You may
be tempted to start off structuring workspaces based on your organizational
structure (such as a workspace per department), but this approach frequently ends
up being too broad.
Will a Power BI app (which comprises reports and dashboards from a single
workspace) be the best way to deliver content to consumers, or will direct access
to a workspace be sufficient for content viewers?
Will certain reports and dashboards be embedded elsewhere, such as Teams,
SharePoint Online, or a secure portal or website?
Will consumers access content using mobile devices? Requirements to deliver
reports to small form factor devices will influence some report design decisions.
Will consumers be allowed to create new reports from the published dataset? This
capability can be enabled by assigning dataset build permission to a user.
If consumers want to customize a report, can they save a copy of it and personalize
it to meet their needs?
U Caution
Although the Save a copy capability is a nice feature, it should be used with caution
when the report includes certain graphics or header/footer messages. Since logos,
icons, and textual messages often relate to branding requirements or regulatory
compliance, it's important to carefully control how they're delivered and
distributed. If Save a copy is used, but the original graphics or header/footer
messages remain unchanged by the new author, it can result in confusion about
who actually produced the report. It can also reduce the meaningfulness of the
branding.
Content can be accessed by consumers who don't have a Power BI Pro or Premium
Per User (PPU) license.
Support for large datasets.
Support for more frequent data refreshes.
Support for using the full feature set of dataflows.
Enterprise features, including deployment pipelines and the XMLA endpoint.
Can an existing Power BI shared dataset be used, or is the creation of a new Power
BI dataset appropriate for this solution?
Does an existing shared dataset need to be augmented with new data or measures
to meet additional needs?
Which data storage mode will be most appropriate? Options include Import,
DirectQuery, Composite, or Live Connection.
Should aggregations be used to enhance query performance?
Will creation of a dataflow be useful and can it serve as a source for numerous
datasets?
Will a new gateway data source need to be registered?
Decide where original content will be stored
In addition to planning the target deployment destination, it's also important to plan
where the original—or source—content will be stored, such as:
Specify an approved location for storing the original Power BI Desktop (.pbix) files.
Ideally, this location is available only to people who edit the content. It should
align with how security is set up in the Power BI service.
Use a location for original Power BI Desktop files that includes versioning history or
source control. Versioning permits the content author to revert to a previous file
version, if necessary. OneDrive for work or school or SharePoint work well for this
purpose.
Specify an approved location for storing non-centralized source data, such as flat
files or Excel files. It should be a path that any of the dataset authors can reach
without error and is backed up regularly.
Specify an approved location for content exported from the Power BI service. The
goal is to ensure that security defined in the Power BI service isn't inadvertently
circumvented.
Tip
Labor costs—salaries and wages—are usually among the highest expenses in most
organizations. Although it can be difficult to accurately estimate, productivity
enhancements have an excellent return on investment (ROI).
Next steps
In the next article in this Power BI migration series, learn about Stage 3, which is
concerned with conducting a proof of concept to mitigate risk and address unknowns as
early as possible when migrating to Power BI.
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Conduct proof of concept to migrate to
Power BI
Article • 02/27/2023
This article describes Stage 3, which is concerned with conducting a proof of concept
(POC) to mitigate risk and address unknowns as early as possible when migrating to
Power BI.
7 Note
The focus of Stage 3 is to address unknowns and mitigate risks as early as possible. A
technical POC is helpful for validating assumptions. It can be done iteratively alongside
solution deployment planning (described in Stage 2).
The output from this stage is a Power BI solution that's narrow in scope, addresses the
initial open questions, and is ready for additional work in Stage 4 to make it production-
ready.
) Important
Most of the topics discussed in this article also apply to a standard Power BI
implementation project. As your organization becomes more experienced with
Power BI, the need to conduct POCs diminishes. However, due to the fast release
cadence with Power BI and the continual introduction of new features, you might
regularly conduct technical POCs for learning purposes.
The POC scope is dependent on what the unknowns are, or which goals need to be
validated with colleagues. To reduce complexity, keep a POC as narrow as possible in
terms of scope.
Most often with a migration, requirements are well known because there's an existing
solution to start from. However, depending on the extent of improvements to be made
or existing Power BI skills, a POC still provides significant value. In addition, rapid
prototyping with consumer feedback may be appropriate to quickly clarify requirements
—especially if enhancements are made.
) Important
Even if a POC includes only a subset of data, or includes only limited visuals, it's
often important to take it from start to finish. That is, from development in Power BI
Desktop to deployment to a development workspace in the Power BI service. It's
the only way to fully accomplish the POC objectives. It's particularly true when the
Power BI service must deliver critical functionality that you haven't used before, like
a DirectQuery dataset using single sign-on. During the POC, focus your efforts on
aspects you're uncertain about or need to verify with others.
Due to its extreme flexibility, there are some aspects about Power BI that may be
fundamentally different from the legacy BI platform you're migrating from.
If you're migrating from a legacy BI platform where reports reference relational data
sources using SQL queries or stored procedures, and if you're planning to use Power BI
in DirectQuery mode, you may be able to achieve close to a one-to-one migration of
the data model.
U Caution
If you see the creation of lots of Power BI Desktop files comprising a single
imported table, it's usually an indicator that the design isn't optimal. Should you
notice this situation, investigate whether the use of shared datasets that are
created using a star schema design could achieve a better result.
1. The legacy dashboard can be recreated as a Power BI report. Most reports are
created with Power BI Desktop. Paginated reports and Excel reports are alternative
options, too.
2. The legacy dashboard can be recreated as a Power BI dashboard. Dashboards are a
visualization feature of the Power BI service. Dashboard visuals are often created
by pinning visuals from one or more reports, Q&A, or Quick Insights.
Tip
Because dashboards are a Power BI content type, refrain from using the word
dashboard in the report or dashboard name.
When recreating report visuals, focus more on the big picture business questions that
are being addressed by the report. It removes the pressure to replicate the design of
every visual in precisely the same way. While content consumers appreciate consistency
when using migrated reports, it's important not to get caught up in time-consuming
debates about small details.
Next steps
In the next article in this Power BI migration series, learn about stage 4, which is
concerned with creating and validating content when migrating to Power BI.
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Create content to migrate to Power BI
Article • 03/14/2023
This article describes Stage 4, which is concerned with creating and validating content
when migrating to Power BI.
7 Note
The focus of Stage 4 is performing the actual work to convert the proof of concept
(POC) to a production-ready solution.
The output from this stage is a Power BI solution that has been validated in a
development workspace and is ready for deployment to production.
Tip
Most of the topics discussed in this article also apply to a standard Power BI
implementation project.
Ideally, from the very beginning, consider decoupling the development work for data
and reports. Decoupling data and reports will facilitate the separation of work, and
permissions, when different people are responsible for data modeling and reports. It
makes for a more scalable approach and encourages data reusability.
Acquire data from one or more data sources (which may be a Power BI dataflow).
Shape, combine, and prepare data.
Create the dataset model, including date tables.
Create and verify model relationships.
Define measures.
Set up row-level security, if necessary.
Configure synonyms and optimize Q&A.
Plan for scalability, performance, and concurrency, which may influence your
decisions about data storage modes, such as using a Composite model or
aggregations.
7 Note
Many of these decisions will have been made in earlier stages of planning or in the
technical POC.
1. Data accuracy
2. Security
3. Functionality
4. Performance
As part of ongoing data validation efforts, the new report will typically need to be cross-
checked with the original source system. Ideally, this validation occurs in a repeatable
way every time you publish a report change.
Validate security
When validating security, there are two primary aspects to consider:
Data permissions
Access to datasets, reports, and dashboards
In an Import dataset, data permissions are applied by defining row-level security (RLS).
It's also possible that data permissions are enforced by the source system when using
DirectQuery storage mode (possibly with single sign-on).
Validate functionality
It's time to double-check dataset details like field names, formatting, sorting, and default summarization behavior. Interactive report features, such as slicers, drill-down actions, drillthrough actions, expressions, buttons, or bookmarks, should all be verified, too.
Validate performance
Performance of the Power BI solution is important for consumer experience. Most
reports should present visuals in under 10 seconds. If you have reports that take longer
to load, pause and reconsider what may be contributing to delays. Report performance
should be assessed regularly in the Power BI service, in addition to Power BI Desktop.
Many performance issues arise from substandard DAX (Data Analysis eXpressions), poor
dataset design, or suboptimal report design (for instance, trying to render too many
visuals on a single page). Technical environment issues, such as the network, an
overloaded data gateway, or how a Premium capacity is configured can also contribute
to performance issues. For more information, see the Optimization guide for Power BI
and Troubleshoot report performance in Power BI.
Dataset documentation
Report documentation
Documentation can be stored wherever it's most easily accessed by the target audience.
Common options include:
Within a SharePoint site: A SharePoint site may exist for your Center of Excellence
or an internal Power BI community site.
Within an app: URLs may be configured when publishing a Power BI app to direct
the consumer to more information.
Within individual Power BI Desktop files: Model elements, like tables and
columns, can define a description. These descriptions appear as tooltips in the
Fields pane when authoring reports.
Tip
If you create a site to serve as a hub for Power BI-related documentation, consider
customizing the Get Help menu with its URL location.
You may also choose to include additional report documentation on a hidden page of
your report. It could include design decisions and a change log.
Next steps
In the next article in this Power BI migration series, learn about stage 5, which is
concerned with deploying, supporting, and monitoring content when migrating to
Power BI.
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Deploy to Power BI
Article • 02/27/2023
This article describes Stage 5, which is concerned with deploying, supporting, and
monitoring content when migrating to Power BI.
7 Note
The primary focus of Stage 5 is to deploy the new Power BI solution to production.
The output from this stage is a production solution ready for use by the business. When
working with an agile method, it's acceptable to have some planned enhancements that
will be delivered in a future iteration. Support and monitoring are also important at this
stage, and on an ongoing basis.
Tip
Except for running in parallel and decommissioning the legacy reports, which are
discussed below, the topics discussed in this article also apply to a standard Power
BI implementation project.
Connection strings and parameters: Adjust dataset connection strings if the data source differs between development and test. Parameterization can be used to manage connection strings effectively (a sketch follows this list).
Workspace content: Publish datasets and reports to the test workspace, and create
dashboards.
App: Publish an app using the content from the test workspace, if it will form part of the UAT process. Usually, app permissions are restricted to a small number of people involved with UAT.
Data refresh: Schedule the dataset refresh for any Import datasets for the period
when UAT is actively occurring.
Security: Update or verify workspace roles. Access to the test workspace is typically limited to the small number of people who are involved with UAT.
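As a minimal sketch of the parameterization mentioned in the first item, the following updates a dataset parameter to point at the test data source and then triggers a refresh through the Power BI REST API (the Update Parameters In Group and Refresh Dataset In Group operations) by using Invoke-PowerBIRestMethod. The workspace ID, dataset ID, parameter name (ServerName), and server value are all hypothetical.
PowerShell
#Hypothetical IDs; replace with your own workspace (group) and dataset IDs:
$WorkspaceId = '00000000-0000-0000-0000-000000000000'
$DatasetId   = '11111111-1111-1111-1111-111111111111'

#Point the dataset at the test data source by updating a hypothetical ServerName parameter:
$Body = '{ "updateDetails": [ { "name": "ServerName", "newValue": "sql-test.contoso.com" } ] }'
Invoke-PowerBIRestMethod `
    -Url "groups/$WorkspaceId/datasets/$DatasetId/Default.UpdateParameters" `
    -Method Post `
    -Body $Body

#Refresh the dataset so the new parameter value takes effect:
Invoke-PowerBIRestMethod -Url "groups/$WorkspaceId/datasets/$DatasetId/refreshes" -Method Post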
7 Note
For more information about options for deployment to development, test, and
production, see Section 9 of the Planning a Power BI enterprise deployment
whitepaper .
The extent to which this UAT process is formal, including written sign-offs, will depend
on your change management practices.
Deploy to production environment
There are several considerations for deploying to the production environment.
Expand permissions in the production workspace, or the app, gradually until all target
users have permission to the new Power BI solution.
Tip
Use the Power BI Activity Log to understand how consumers are adopting and
using the new Power BI solution.
Gateway maintenance: A new data source registration in the data gateway may be
required.
Gateway drivers and connectors: A new proprietary data source may require
installation of a new driver or custom connector on each server in the gateway
cluster.
Create a new Premium capacity: You may be able to use an existing Premium capacity. Or, there may be situations when a new Premium capacity is warranted, such as when you purposely want to separate a departmental workload.
Set up a Power BI dataflow: Data preparation activities can be set up once in a
Power BI dataflow using Power Query Online. It helps avoid replicating data
preparation work in many different Power BI Desktop files.
Register a new organizational visual: Organizational visual registration can be
done in the admin portal for custom visuals that didn't originate from AppSource.
Set featured content: A tenant setting exists that controls who may feature
content in the Power BI service home page.
Set up sensitivity labels: Sensitivity labels are configured in Microsoft Purview Information Protection.
At this point, you have reached a big milestone. Celebrate your accomplishment at
completing the migration.
Conduct a retrospective
Consider conducting a retrospective to examine what went well with the migration, and
what could be done better with the next migration.
Run in parallel
In many situations, the new solution will run in parallel to the legacy solution for a
predetermined time. Advantages of running in parallel include:
Here are some questions that can be addressed by reviewing the activity log:
) Important
Be sure to have someone regularly review the activity log. Merely capturing it and
storing the history does have value for auditing or compliance purposes. However,
the real value is when proactive action can be taken.
Having a formal support process, staffed by IT with support tickets, is also essential for
handling routine system-oriented requests and for escalation purposes.
7 Note
The different types of internal and external support are described in the Power BI
adoption roadmap.
You may also have a Center of Excellence (COE) that acts like internal consultants who
support, educate, and govern Power BI in the organization. A COE can be responsible for
curating helpful Power BI content in an internal portal.
Lastly, content consumers should know who to contact with questions about the content, and there should be a mechanism for providing feedback on issues or improvements.
For more information about user support, with a focus on the resolution of issues, see
Power BI adoption roadmap: User support.
Next steps
In the final article in this series, learn from customers when migrating to Power BI.
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Learn from customer Power BI
migrations
Article • 02/27/2023
This article, which concludes the series on migrating to Power BI, shares key lessons
learned by two customers who have successfully migrated to Power BI.
During the second half of 2018, a formal announcement was made declaring that Power
BI was the approved BI tool for the organization. And, accordingly, all new BI
development work should take place in Power BI. The availability of Power BI Premium
was a key driver for making this decision. At this time, the organization discouraged the
use of the former BI platform, and planning for transition commenced.
Towards the end of 2019, work began to migrate existing content from the legacy BI
platform to Power BI. Some early adopters migrated their content rapidly. That helped
build even more momentum with Power BI around the organization. Content owners
and authors were then asked to begin preparations to fully migrate to Power BI by the
end of 2020. The organization does still face challenges related to skills, time, and
funding—though none of their challenges are related to the technology platform itself.
) Important
Power BI had already become successful and entrenched within the organization
before the business units were asked to undergo a formal migration effort away
from the former BI platform.
Prepare to handle varying responses
In this large decentralized organization, there were varying levels of receptiveness and
willingness to move to Power BI. Beyond concerns related to time and budget, there
were staff who had made significant investments in building their skills in the former BI
platform. So, the announcement about standardizing on Power BI wasn't news
welcomed by everyone. Since each business unit has its own budget, individual business
units could challenge decisions such as this one. As IT tool decisions were made
centrally, that resulted in some challenges for the executive sponsor and BI leaders to
handle.
) Important
Communication with leadership teams throughout the business units was critical to
ensure they all understood the high-level organizational benefits of standardizing
on Power BI. Effective communication became even more essential as the migration
progressed and the decommissioning date of the legacy BI platform approached.
) Important
It's very easy to overestimate how critical a report actually is. For reports that aren't
used frequently, evaluate whether they can be decommissioned entirely.
Sometimes, the cheapest and easiest thing to do is nothing.
) Important
Although time estimates are often necessary to obtain funding and personnel
assignments, they're probably most valuable in the aggregate.
) Important
Additional responsibility falls to the business units when it's impractical to manage
change from one central team.
Create an internal community
The company established a Center of Excellence (COE) to provide internal training
classes and resources. The COE also serves as an internal consultancy group that's ready
to assist content authors with technical issues, resolution of roadblocks, and best
practice guidance.
There's also an internal Power BI community, which has been a massive success, with more than 1,600 members. The community is managed in Yammer. Members can ask internally relevant questions and receive answers adhering to best practices and framed within organizational constraints. This type of user-to-user interaction relieves the COE of much of the support burden. However, the COE does monitor the questions and answers, and it gets involved in conversations when appropriate.
) Important
Have a very well defined scope for what the COE does, such as: adoption,
governance, guidance, best practices, training, support, and perhaps even hands-on
development. While a COE is incredibly valuable, measuring its return on
investment can be difficult.
) Important
Power BI had many active users across the organization before commencing the
phase out of their legacy BI platform and solutions.
Each of the analytics groups is dedicated to a specific business unit or a shared services
function. A small group may contain a single analyst, while a larger group can have 10-
15 analysts.
) Important
The distributed analytics groups comprise subject matter experts who are familiar
with the day-to-day business needs. This separation allows the central BI team to
focus primarily on technical enablement and support of the BI services and tools.
The empowerment of data analysts within the company produced immediate positive outcomes. However, the initial focus of Power BI development was on visualization. While that focus delivered valuable BI solutions, it also led to a large number of Power BI Desktop files, each with a one-to-one relationship between the report and its dataset. The outcome was many datasets, and duplication of data and business logic. To reduce duplication of data, logic, and effort, the company delivered training and provided support to content authors.
) Important
Conduct a technical proof of concept to evaluate the model storage mode that
works best. Also, teach data modelers about model storage modes and how they
can choose an appropriate mode for their project.
) Important
Licensing questions often arise. Be prepared to educate and help content authors
to address licensing questions. Validate that user requests for Power BI Pro licenses
are justified.
) Important
Have a plan for creating and managing on-premises data gateways. Decide who is
permitted to install and use a personal gateway and enforce it with gateway
policies.
Layer 1: Intra-team: People learn from, and teach, each other on a day-to-day
basis.
Layer 2: Power BI community: People ask questions of the internal Teams
community to learn from each other and communicate important information.
Layer 3: Central BI team and COE: People submit email requests for assistance.
Office hour sessions are held twice per week to collectively discuss problems and
share ideas.
) Important
Although the first two layers are less formal, they're just as important as the third layer of support. Experienced users tend to rely mostly on people they know, whereas newer users (or those who are the single data analyst for a business unit or shared service) tend to rely more on formal support.
There are now six internal Power BI courses in their internal catalog. The Dashboard in a
Day course remains a popular course for beginners. To help users deepen their skills,
they deliver a series of three Power BI courses and two DAX courses.
) Important
Pay attention to how Premium capacities are used, and how workspaces are
assigned to them.
Next steps
Other helpful resources include:
Microsoft's BI transformation
Planning a Power BI enterprise deployment whitepaper
Dashboard in a Day
Questions? Try asking the Power BI Community
Suggestions? Contribute ideas to improve Power BI
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Power BI adoption roadmap
Article • 02/27/2023
The goal of this series of articles is to provide a roadmap. The roadmap presents a series
of strategic and tactical considerations and action items that directly lead to successful
Power BI adoption, and help build a data culture in your organization.
Advancing adoption and cultivating a data culture is about more than implementing
technology features. Technology can assist an organization in making the greatest
impact, but a healthy data culture involves a lot of considerations across the spectrum of
people, processes, and technology.
7 Note
While reading this series of articles, it's recommended you also take into consideration the Power BI implementation planning guidance. After you're familiar with the concepts in the Power BI adoption roadmap, consider reviewing the usage scenarios. Understanding the diverse ways Power BI is used can influence your implementation strategies and decisions.
This series of articles correlates with the following Power BI adoption roadmap diagram:
The areas in the above diagram include:
Area Description
Data culture: Data culture refers to a set of behaviors and norms in the organization that
encourages a data-driven culture. Building a data culture is closely related to adopting
Power BI, and it's often a key aspect of an organization's digital transformation.
Content ownership and management: There are three primary strategies for how
business intelligence (BI) content is owned and managed: business-led self-service BI,
managed self-service BI, and enterprise BI. These strategies have a significant influence on
adoption, governance, and the Center of Excellence (COE) operating model.
Content delivery scope: There are four primary strategies for content delivery including
personal BI, team BI, departmental BI, and enterprise BI. These strategies have a significant
influence on adoption, governance, and the COE operating model.
Center of Excellence: A Power BI COE is an internal team of technical and business experts.
These experts actively assist others who are working with data within the organization. The
COE forms the nucleus of the broader community to advance adoption goals that are
aligned with the data culture vision.
Governance: Data governance is a set of policies and procedures that define the ways in
which an organization wants data to be used. When adopting Power BI, the goal of
governance is to empower the internal user community to the greatest extent possible,
while adhering to industry, governmental, and contractual requirements and regulations.
Mentoring and user enablement: A critical objective for adoption efforts is to enable
users to accomplish as much as they can within the guardrails established by governance
guidelines and policies. The act of mentoring users is one of the most important
responsibilities of the COE. It has a direct influence on adoption efforts.
User support: User support includes both informally organized, and formally organized,
methods of resolving issues and answering questions. Both formal and informal support
methods are critical for adoption.
The relationships in the diagram shown above can be summarized in the following
bullet list:
Your organizational data culture vision will strongly influence the strategies that
you follow for self-service and enterprise content ownership and management
and content delivery scope.
These strategies will, in turn, have a big impact on the operating model for your
Center of Excellence and governance decisions.
The established governance guidelines, policies, and processes affect the
implementation methods used for mentoring and enablement, the community of
practice, and user support.
Governance decisions will dictate the day-to-day system oversight (administration)
activities.
All data culture and adoption-related decisions and actions are accomplished more
easily with guidance and leadership from an executive sponsor.
Each individual article in this series discusses key topics associated with the items in the
diagram. Considerations and potential action items are provided. Each article concludes
with a set of maturity levels to help you assess your current state so you can decide
what action to take next.
Power BI adoption
Successful Power BI adoption involves making effective processes, support, tools, and
data available and integrated into regular ongoing patterns of usage for content
creators, consumers, and stakeholders in the organization.
7 Note
The remaining articles in this Power BI adoption series discuss the following aspects of
adoption.
) Important
You may be wondering how this Power BI adoption roadmap is different from the
Power BI adoption framework . The adoption framework was created primarily to
support Microsoft partners. It is a lightweight set of resources to help partners
deploy Power BI solutions for their customers.
This Power BI adoption series is more current. It is intended to guide any person or
organization that is using—or considering using—Power BI. If you're seeking to
improve your existing Power BI implementation, or planning a new Power BI
implementation, this adoption roadmap is a great place to start. You will find a lot
of valuable information in the Power BI adoption framework , so we encourage
you to review it.
Target audience
The intended audience of this series of articles is interested in one or more of the
following outcomes.
Primarily, this series of articles will be helpful to those who work in an organization with
one or more of the following characteristics.
To fully benefit from the information provided in these articles, it's an advantage to have
at least an understanding of Power BI fundamental concepts.
Next steps
In the next article in this series, learn about the Power BI adoption maturity levels. The
maturity levels are referenced throughout the entire series of articles. Also, see the
conclusion article for additional adoption-related resources.
Experienced Power BI partners are available to help your organization succeed with
adoption of Power BI. To engage a Power BI partner, visit the Power BI partner portal .
Acknowledgments
This series of articles was written by Melissa Coates, Data Platform MVP, and owner of
Coates Data Strategies, with significant contributions from Matthew Roche. Reviewers
include Cory Moore, James Ward, Timothy Bindas, Greg Moir, Chuy Varela, Daniel
Rubiolo, Sanjay Raut, and Peter Myers.
Power BI adoption roadmap maturity
levels
Article • 02/27/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
There are three inter-related perspectives to consider when adopting a technology like
Power BI.
Type Description
Organizational adoption: Organizational adoption refers to the effectiveness of the
organization's governance and data management practices to support and enable BI efforts.
User adoption: User adoption is the extent to which consumers and creators continually
increase their knowledge. It's concerned with whether they're actively using Power BI, and
whether they're using it in the most effective way.
Solution adoption: Solution adoption refers to the impact and business value achieved for
individual requirements and Power BI solutions.
As the four arrows in the previous diagram indicate, the three types of adoption are all
strongly inter-related.
The remainder of this article introduces the three types of Power BI adoption in more
detail.
It's helpful to think about organizational adoption from the perspective of a maturity
model. For consistency with the Power CAT adoption maturity model and the maturity
model for Microsoft 365, this Power BI adoption roadmap aligns with the five levels from
the Capability Maturity Model, which were later enhanced by the Data Management
Maturity (DMM) model from ISACA (note the DMM was a paid resource that has been
retired).
Every organization has limited time, funding, and people, so it must be selective about
where it prioritizes its efforts. To get the most from your investment
in Power BI, seek to attain at least maturity level 300 or 400, as discussed below. It's
common that different business units in the organization evolve and mature at different
rates, so be cognizant of the organizational state as well as progress for key business
units.
Pockets of success and experimentation with Power BI exist in one or more areas of
the organization.
Achieving quick wins has been a priority, and it has delivered some successes.
Organic growth has led to the lack of a coordinated strategy or governance
approach.
Practices are undocumented, with significant reliance on tribal knowledge.
There are few formal processes in place for effective data management.
Risk exists due to a lack of awareness of how data is used throughout the
organization.
The potential for a strategic investment with Power BI is acknowledged, but there's
no clear path forward for purposeful, organization-wide execution.
Certain Power BI content is now critical in importance and/or it's broadly used by
the organization.
There are attempts to document and define repeatable practices, however efforts
are siloed, reactive, and deliver varying levels of success.
There's an over-reliance on individuals having good judgment and adopting
healthy habits that they learned on their own.
Power BI adoption continues to grow organically and produces value. However, it
takes place in an uncontrolled way.
Resources for an internal community are established, such as a Teams channel or
Yammer group.
Initial planning for a consistent Power BI governance strategy is underway.
There's recognition that a Power BI Center of Excellence (COE) can deliver value.
Note
The above characteristics are generalized. When considering maturity levels and
designing a plan, you'll want to consider each topic or goal independently. In
reality, it's probably not possible to reach maturity level 500 for every aspect
of Power BI adoption for the entire organization. So, assess maturity levels
independently per goal. That way, you can prioritize your efforts where they will
deliver the most value. The remainder of the articles in this Power BI adoption
series present maturity levels on a per-topic basis.
Note
User adoption encompasses how consumers view content, as well as how self-service
creators generate content for others to consume.
User adoption occurs on an individual user basis, but it's measured and analyzed in the
aggregate. Individual users progress through the four stages of user adoption at their
own pace. An individual who adopts a new technology will take some time to achieve
proficiency. Some users will be eager; others will be reluctant to learn yet another tool,
regardless of the promised productivity improvements. Advancing through the user
adoption stages involves time and effort, and it involves behavioral changes to become
aligned with organizational adoption objectives. The extent to which the organization
supports users advancing through the user adoption stages has a direct correlation to
the organizational-level adoption maturity.
An individual has heard of, or been initially exposed to, Power BI in some way.
An individual may have access to Power BI but isn't yet actively using it.
It's easy to underestimate the effort it takes to progress from stage 2 (understanding) to
stage 4 (proficiency). Typically, it takes the longest time to progress from stage 3
(momentum) to stage 4 (proficiency).
Important
By the time a user reaches the momentum and proficiency stages, the organization
needs to be ready to support them in their efforts. You can consider some proactive
efforts to encourage users to progress through stages. For more information, see
the community of practice and the user support articles.
Tip
Exploration and experimentation are the main approaches to testing out new
ideas. Exploration of new ideas can occur through informal self-service BI, or
through a formal proof of concept (POC), which is purposely narrow in scope. The
goal is to confirm requirements, validate assumptions, address unknowns, and
mitigate risks.
A small group of users test the proof of concept solution and provide useful
feedback.
All exploration—and initial feedback—could occur within Power BI Desktop or
Excel. Use of the Power BI service is limited.
The solution is functional and meets the basic set of user requirements. There are
likely plans to iterate on improvements and enhancements.
The solution is deployed to the Power BI service.
All necessary supporting components are in place, such as gateways to support
scheduled refresh.
Users are aware of the solution and show interest in using it. Potentially, it may be
a limited preview release, and may not yet be ready to promote to a production
workspace.
Solution phase 3 – Valuable
Common characteristics of phase 3 solution adoption include:
Target users find the solution is valuable and experience tangible benefits.
The solution is promoted to a production workspace.
Validations and testing occur to ensure data quality, accurate presentation,
accessibility, and acceptable performance.
Content is endorsed, when appropriate.
Usage metrics for the solution are actively monitored.
User feedback loops are in place to facilitate suggestions and improvements that
can contribute to future releases.
Solution documentation is generated to support the needs of information
consumers (such as data sources used or how metrics are calculated), and help
future creators (such as documenting any future maintenance or planned
enhancements).
Ownership of the content, and its subject matter experts, are clear.
Report branding and theming are in place, and they're in line with governance
guidelines.
Target users actively and routinely use the solution, and it's considered essential
for decision-making purposes.
The solution resides in a production workspace well-separated from development
and test content. Change management and release management are carefully
controlled due to the impact of changes.
A subset of users regularly provides feedback to ensure the solution continues to
meet requirements.
Expectations for the success of the solution are clear and are measured.
Expectations for support of the solution are clear, especially if there are service
level agreements.
The solution aligns with organizational governance guidelines and practices.
Most content is certified since it's critical in nature.
Formal user acceptance testing for new changes may occur, particularly for IT-
managed content.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about the
organizational data culture and its impact on adoption efforts.
Power BI adoption roadmap: Data
culture
Article • 02/27/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
Building a data culture is closely related to adopting Power BI, and it's often a key aspect
of an organization's digital transformation. The term data culture can be defined in
different ways by different organizations. In this series of articles, data culture means a
set of behaviors and norms in an organization that encourages regular, informed,
data-driven decision-making.
Important
Think of data culture as what you do, not what you say. Your data culture is not a
set of rules (that's governance). So, data culture is a somewhat abstract concept. It's
the behaviors and norms that are allowed, rewarded, and encouraged—or those
that are disallowed and discouraged. Bear in mind that a healthy data culture
motivates employees at all levels of the organization to generate and distribute
actionable knowledge.
Within an organization, certain business units or teams are likely to have their own
behaviors and norms for getting things done. The specific ways to achieve data culture
objectives can vary across organizational boundaries. What's important is that they
should all align with the organizational data culture objectives. You can think of this
structure as aligned autonomy.
The following circular diagram conveys the interrelated aspects that influence your data
culture:
The diagram represents the somewhat ambiguous relationships among the following
items:
Data culture is the outer circle. All topics within it contribute to the state of the
data culture.
Organizational adoption (including the implementation aspects of mentoring and
user enablement, user support, community of practice, governance, and system
oversight) is the inner circle. All topics are major contributors to the data culture.
Executive support and the Center of Excellence are drivers for the success of
organizational adoption.
Data literacy, data democratization, and data discovery are data culture aspects
that are heavily influenced by organizational adoption.
Content ownership, content management, and content delivery scope are closely
related to data democratization.
The elements of the diagram are discussed throughout this series of articles.
Data culture outcomes aren't specifically mandated. Rather, the state of the data culture
is the result of following the governance rules as they're enforced (or the lack of
governance rules). Leaders at all levels need to actively demonstrate what's important
through their actions, including how they praise, recognize, and reward staff members
who take initiative.
Tip
If you can take for granted that your efforts to develop a data solution (such as a
dataset or a report) will be valued and appreciated, that's an excellent indicator of a
healthy data culture. Sometimes, however, it depends on what your immediate
manager values most.
The initial motivation for establishing a data culture often comes from a specific
strategic business problem or initiative. In these situations, there's often a specific
area where the data culture takes root.
The specific area could be a scope of effort that's smaller than the entire organization,
even if it's still significant. After necessary changes are made at this smaller scope, they
can be incrementally replicated and adapted for the rest of the organization.
Although technology can help advance the goals of a data culture, implementing
specific tools or features isn't the objective. This series of articles covers a lot of topics
that contribute to adoption of a healthy data culture. The remainder of this article
addresses three essential aspects of data culture: data discovery, data democratization,
and data literacy.
Data discovery
A successful data culture depends on users working with the right data in their day-to-
day activities. To achieve this goal, users need to find and access data sources, reports,
and other items.
Data discovery is the ability to effectively search for, and locate, relevant data sources
and reports across the organization. Primarily, data discovery is concerned with
improving awareness that data exists, particularly when data is siloed in departmental
systems. After a user is aware of the data's existence, that user can go through the
standard process to request access to the information. Today, technology helps a lot
with data discovery, advancing well past asking colleagues where to find datasets.
Tip
It's important to have a clear and simple process so users can request access to
data. Knowing that a dataset exists—but being unable to access it within the
guidelines and processes that the domain owner has established—can be a source
of frustration for users. It can force them to use inefficient workarounds instead of
requesting access through the proper channels.
In Power BI, the data hub and the use of endorsements help promote data discovery of
shared datasets. They also encourage self-service creators to reuse and augment
datasets.
Further, data catalog solutions are extremely valuable for data discovery. They can
record metadata tags and descriptions to provide deeper context and meaning. For
example, Azure Purview can scan and catalog an entire Power BI tenant.
Data democratization
Data democratization refers to putting data into the hands of more users who are
responsible for solving business problems. It's about enabling them to make decisions
with the data.
Note
The concept of data democratization does not imply a lack of security or a lack of
justification based on job role. As part of a healthy data culture, data
democratization helps reduce shadow IT by providing datasets that are appropriately
secured and governed.
Data literacy
Data literacy refers to the ability to interpret, create, and communicate data accurately
and effectively.
Training efforts, as described in the mentoring and user enablement article, often focus
on how to use the technology itself. Technology skills are important to producing high-
quality solutions, but it's also important to consider how to purposely advance data
literacy throughout the organization. Put another way, successful adoption takes a lot
more than merely providing Power BI software and licenses to users.
How you go about improving data literacy in your organization depends on many
factors, such as current user skillsets, complexity of the data, and the types of analytics
that are required. You can focus on these activities related to data literacy:
Tip
Getting the right stakeholders to agree on the problem is usually the first step.
Then, it's a matter of getting the stakeholders to agree on the strategic approach to
a solution, along with the solution details.
Checklist - Here are some considerations and key actions that you can take to
strengthen your data culture.
" Align on data culture goals and strategy: Give serious consideration to the type of
data culture that you want to cultivate. Ideally, it's more from a position of user
empowerment than a position of command and control.
" Understand your current state: Talk to stakeholders in different business units to
understand which analytics practices are currently working well and which practices
aren't working well for data-driven decision-making. Conduct a series of workshops
to understand the current state and to formulate the desired future state.
" Speak with stakeholders: Talk to stakeholders in IT, BI, and the COE to understand
which governance constraints need consideration. These talks can present an
opportunity to educate teams on topics like security and infrastructure. You can also
use the opportunity to educate them on what Power BI actually is (and how it
includes powerful data preparation and modeling capabilities, in addition to being
a visualization tool).
" Verify executive sponsorship: Verify the level of executive sponsorship and support
that you have in place to advance data culture goals.
" Make purposeful decisions about your BI strategy: Decide what the ideal balance
of business-led self-service BI, managed self-service BI, and enterprise BI should be
for the key business units in the organization (covered in the content ownership
and management article). Also consider how the strategy relates to the extent of
published content for personal BI, team BI, departmental BI, and enterprise BI
(described in the content delivery scope article). Determine how these decisions
affect your action plan.
" Create an action plan: Begin creating an action plan for immediate, short-term, and
long-term action items. Identify business groups and problems that represent
"quick wins" and can make a visible difference.
" Create goals and metrics: Determine how you'll measure effectiveness for your
data culture initiatives. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate the results of your efforts.
Maturity levels
The following maturity levels will help you assess the current state of your data culture.
Level State of data culture
100: Initial
The enterprise BI team can't keep up with the needs of the business. A significant
backlog of requests exists for the enterprise BI team.
There's a lack of oversight and visibility into self-service BI activities. The successes
or failures of BI solutions aren't well understood.
200: Repeatable
Multiple teams have had measurable successes with self-service BI solutions. People in
the organization are starting to pay attention.
Investments are being made to identify the ideal balance of enterprise BI and self-service BI.
300: Defined
Specific goals are established for advancing the data culture. These goals are
implemented incrementally.
400: Capable
The data culture goals to employ informed decision-making are aligned with
organizational objectives. They're actively supported by the executive sponsor and the
COE, and they have a direct impact on adoption strategies.
A healthy and productive partnership exists between the executive sponsor, COE,
business units, and IT. The teams are working towards shared goals.
Individuals who take initiative in building valuable BI solutions are recognized and
rewarded.
500: Efficient
The business value of BI solutions is regularly evaluated and measured. KPIs or OKRs
are used to track data culture goals and the results of BI efforts.
Feedback loops are in place, and they encourage ongoing data culture improvements.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about the
importance of an executive sponsor.
Power BI adoption roadmap: Executive
sponsorship
Article • 02/27/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
When planning to advance the data culture and the state of organizational adoption for
Power BI, it's crucial to have executive support. An executive sponsor is imperative
because adopting Power BI is far more than just a technology project.
Important
The ideal executive sponsor has sufficient credibility, influence, and authority
throughout the organization.
Identifying an executive sponsor
There are multiple ways to identify an executive sponsor.
Top-down pattern
An executive sponsor may be selected by a more senior executive. For example, the
Chief Executive Officer (CEO) may hire a Chief Data Officer (CDO) or Chief Analytics
Officer (CAO) to explicitly advance the organization's data culture objectives or lead
digital transformation efforts. The CDO or CAO then becomes the ideal candidate to
serve as the executive sponsor for Power BI (or analytics in general).
Here's another example: The CEO may empower an existing executive, such as the Chief
Financial Officer (CFO), because they have a good track record leading data and
analytics in their organization. As the new executive sponsor, the CFO could then lead
efforts to replicate the finance team's success to other areas of the organization.
Bottom-up pattern
Alternatively, a candidate for the executive sponsor role could emerge due to the
success they've experienced with creating BI solutions. For example, a business unit
within the organization, such as Finance, has organically achieved great success with
their use of data and analytics. Essentially, they've successfully formed their own data
culture on a small scale. A junior-level leader who hasn't reached the executive level
(such as a director) may then grow into the executive sponsor role by sharing successes
with other business units across the organization.
The success of a leader using the bottom-up pattern depends on being recognized by
senior leadership.
With a bottom-up approach, the sponsor may be able to make some progress, but they
won't have formal authority over other business units. Without clear authority, it's only a
matter of time until challenges occur that are beyond their level of authority. For this
reason, the top-down approach has a higher probability of success. However, initial
successes with a bottom-up approach can convince leadership to increase their level of
sponsorship, which may start a healthy competition across other business units in the
adoption of BI.
Checklist - Here's a list of considerations and key actions you can take to establish or
strengthen executive support for Power BI.
Maturity levels
The following maturity levels will help you assess your current state of executive
support.
100: Initial
There may be awareness from at least one executive about the strategic importance
of how Power BI can play a part in advancing the organization's data culture goals.
However, neither a Power BI sponsor nor an executive-level decision-maker is identified.
200: Repeatable
Informal executive support exists for Power BI through informal channels and relationships.
300: Defined
An executive sponsor is identified. Expectations are clear for the role.
400: Capable
An executive sponsor is well established, with sufficient authority across
organizational boundaries.
A healthy and productive partnership exists between the executive sponsor, COE,
business units, and IT. The teams are working towards shared data culture goals.
500: Efficient
The executive sponsor is highly engaged. They're a key driver for advancing the
organization's data culture vision.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about content
ownership and management, and its effect on business-led self-service BI, managed
self-service BI, and enterprise BI.
Power BI adoption roadmap: Business
alignment
Article • 09/11/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
Business intelligence (BI) activities and solutions have the best potential to deliver value
when they're well aligned to organizational business goals. In general, effective business
alignment helps to improve adoption. With effective business alignment, the data
culture and BI strategy enable business users to achieve their business objectives.
Effective business alignment of analytics activities and BI solutions delivers benefits
such as:
Improved adoption, because content consumers are more likely to use solutions
that enable them to achieve their objectives.
Increased business return on investment (ROI) for analytics initiatives and
solutions, because these initiatives and solutions will be more likely to directly
advance progress toward business goals.
Less effort and fewer resources spent on change management and changing
business requirements, due to an improved understanding of business data needs.
Communication alignment
Effective and consistent communication is critical to aligning processes. Consider the
following actions and activities when you want to improve communication for successful
business alignment.
Make a communication plan for central teams and the user community to follow.
Plan regular alignment meetings between different teams and groups. For
example, central teams can plan regular planning and priority alignments with
business units. Another example is when central teams schedule regular meetings
to mentor and enable self-service users.
Set up a centralized portal to consolidate communication and documentation for
user communities. For strategic solutions and initiatives, consider using a
communication hub.
Limit complex business and technical terminology in cross-functional
communications.
Strive for concise communication and documentation that's formatted and well
organized. That way, people can easily find the information that they need.
Consider maintaining a visible roadmap that shows the planned BI solutions and
activities relevant to the user community in the next quarter.
Be transparent when communicating policies, decisions, and changes.
Create a process for people to provide feedback, and review that feedback
regularly as part of regular planning activities.
Strategic alignment
Your business strategy should be well aligned with your BI strategy. To incrementally
achieve this alignment, we recommend that you commit to follow structured, iterative
planning processes.
Strategic planning: Define BI goals and priorities based on the business strategy
and current state of BI adoption and implementation. Typically, strategic planning
occurs every 12-18 months to iteratively define high-level desired outcomes. You
should synchronize strategic planning with key business planning processes.
Tactical planning: Define objectives, action plans, and a backlog of solutions that
help you to achieve your BI goals. Typically, tactical planning occurs quarterly to
iteratively re-evaluate and align the BI strategy and activities to the business
strategy. This alignment is informed by business feedback and changes to business
objectives or technology. You should synchronize tactical planning with key project
planning processes.
Solution planning: Design, develop, test, and deploy BI solutions that support
content creators and consumers in achieving their business objectives. Both
centralized content creators and self-service content creators conduct solution
planning to ensure that the solutions they create are well aligned with business
objectives. You should synchronize solution planning with key adoption and
governance planning processes.
Caution
A governance strategy that's poorly aligned with business objectives can result in
more conflicts and compliance risk, because users might pursue workarounds to
complete their tasks.
Executive alignment
Executive leadership plays a key role in defining the business strategy and business
goals. To this end, executive engagement is an important part of achieving top-down
business alignment.
To achieve executive alignment, consider the following key considerations and activities.
Work with your executive sponsor to organize short, quarterly executive feedback
sessions about the use of BI in the organization. Use this feedback to identify
changes in business objectives, re-assess the BI strategy, and inform future actions
to improve business alignment.
Schedule regular alignment meetings with the executive sponsor to promptly
identify any potential changes in the business strategy or data needs.
Deliver monthly executive summaries that highlight relevant information,
including:
Key performance indicators (KPIs) that measure progress toward BI goals.
Power BI adoption and implementation milestones.
Technology changes that may impact organizational business goals.
Important
Don't underestimate the importance of the role your executive sponsor has in
achieving and maintaining effective business alignment.
Maintain business alignment
Business alignment is a continual process. To maintain business alignment, consider the
following factors.
Assign a responsible team: A working team reviews feedback and organizes re-
alignment sessions. This team is responsible for the alignment of planning and
priorities between the business and BI strategy.
Create and support a feedback process: Your user community requires the means
to provide feedback. Examples of feedback can include requests to change existing
solutions, or to create new solutions and initiatives. This feedback is essential for
bottom-up business user alignment, and it drives iterative and continuous
improvement cycles.
Measure the success of business alignment: Consider using surveys, sentiment
analysis, and usage metrics to assess the success of business alignment. When
combined with other concise feedback mechanisms, this can provide valuable
input to help define future actions and activities to improve business alignment
and Power BI adoption.
Schedule regular re-alignment sessions: Ensure that BI strategic planning and
tactical planning occur alongside relevant business strategy planning (when
business leadership reviews business goals and objectives).
Questions to ask
Use questions like those found below to assess business alignment.
Can people articulate the goals of the organization and the business objectives of
their team?
To what extent do descriptions of organizational goals align across the
organization? How do they align between the business user community and
leadership community? How do they align between business teams and technical
teams?
Does executive leadership understand the strategic importance of data in
achieving business objectives? Does the user community understand the strategic
importance of data in helping them succeed in their jobs?
Are changes in the business strategy reflected promptly in changes to the BI
strategy?
Are changes in business user data needs addressed promptly in BI solutions?
To what extent do data policies support or conflict with existing business processes
and the way that users work?
Do solution requirements focus more on technical features than addressing
business questions? Is there a structured requirements gathering process? Do
content owners and creators interact effectively with stakeholders and content
consumers during requirements gathering?
How are decisions about data or BI investments made? Who makes these
decisions?
How well do people trust existing data and BI solutions? Is there a single version of
truth, or are there regular debates about who has the correct version?
How are BI initiatives and strategy communicated across the organization?
Maturity levels
100: Initial
• Business and BI strategies lack formal alignment, which leads to reactive
implementation and misalignment between data teams and business users.
200: Repeatable
• There are efforts to align BI initiatives with specific data needs without a
consistent approach or understanding of their success.
300: Defined
• BI initiatives are prioritized based on their alignment with strategic business
objectives. However, alignment is siloed and typically focuses on local needs.
400: Capable
• Strategic initiatives and changes have a clear, structured involvement of both the
business and BI strategic decision makers. Business teams and technical teams can
have productive discussions to meet business and governance needs.
• Regular and iterative strategic alignments occur between the business and
technical teams. Changes to the business strategy result in clear actions that are
reflected by changes to the BI strategy to better support business needs.
500: Efficient
• The BI strategy and the business strategy are fully integrated. Continuous
improvement processes drive consistent alignment, and they are themselves data driven.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about content
ownership and management, and its effect on business-led self-service BI, managed
self-service BI, and enterprise BI.
Power BI adoption roadmap: Content
ownership and management
Article • 02/27/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
There are three primary strategies for how business intelligence (BI) content is owned
and managed: business-led self-service BI, managed self-service BI, and enterprise BI.
Note
For the purposes of this series of articles, the term content refers to any type of data
item (like a report or dashboard). It's synonymous with solution.
The organization's data culture is the driver for why, how, and by whom each of these
three content ownership strategies is implemented.
Area Description
Business-led self-service BI: All content is owned and managed by the creators and
subject matter experts within a business unit. This ownership strategy is also known as a
decentralized or bottom-up BI strategy.
Managed self-service BI: The data is owned and managed by a centralized team, whereas
business users take responsibility for reports and dashboards. This ownership strategy is
also known as discipline at the core and flexibility at the edge.
Enterprise BI: All content is owned and managed by a centralized team such as IT,
enterprise BI, or the Center of Excellence (COE).
It's unlikely that an organization operates exclusively with one content ownership and
management strategy. Depending on your data culture, one strategy might be far more
dominant than the others. The choice of strategy could differ from solution to solution,
or from team to team. In fact, a single team can actively use multiple strategies if it's
both a consumer of enterprise BI content and a producer of its own self-service content.
The strategy to pursue depends on factors such as:
How content is owned and managed has a significant effect on governance, the extent
of mentoring and user enablement, needs for user support, and the COE operating
model.
As discussed in the governance article, the level of governance and oversight depends
on:
As stated in the adoption maturity levels article, organizational adoption measures the
state of data management processes and governance. The choices made for content
ownership and management significantly affect how organizational adoption is
achieved.
Role Description
Data steward: Responsible for defining and/or managing acceptable data quality levels
as well as master data management (MDM).
Subject matter expert (SME): Responsible for defining what the data means, what it's
used for, who may access it, and how the data is presented to others. Collaborates with
the domain owner as needed and supports colleagues in their use of data.
Technical owner: Responsible for creating, maintaining, publishing, and securing access
to data and reporting items.
Note
Be clear about who is responsible for managing data items. It's crucial to ensure a
good experience for content consumers. Specifically, clarity on ownership is helpful
for:
In the Power BI service, content owners can set the contact list property for many
types of items. The contact list is also used in security workflows. For example,
when a user is sent a URL to open an app but they don't have permission, they will
be presented with an option to make a request for access.
The remainder of this article covers considerations related to the three content
ownership and management strategies.
Business-led self-service BI
With business-led self-service BI, all content is owned and managed by creators and
subject matter experts. Because responsibility is retained within a business unit, this
strategy is often described as the bottom-up, or decentralized, approach. Business-led
self-service BI is often a good strategy for personal BI and team BI solutions.
Important
The concept of business-led self-service BI is not the same as shadow IT. In both
scenarios, BI content is created, owned, and managed by business users. However,
shadow IT implies that the business unit is circumventing IT and so the solution is
not sanctioned. With business-led self-service BI solutions, the business unit has full
authority to create and manage content. Resources and support from the COE are
available to self-service content creators. It's also expected that the business unit
complies with all established data governance guidelines and policies.
Decentralized data management aligns with the organization's data culture, and
the organization is prepared to support these efforts.
Data exploration and freedom to innovate is a high priority.
The business unit wants to have the most involvement and retain the highest level
of control.
The business unit has skilled people capable of—and fully committed to—
supporting solutions through the entire lifecycle. It covers all types of Power BI
items, including the data (dataflows and datasets), the visuals (reports and
dashboards), and apps.
The flexibility to respond to changing business conditions and react quickly
outweighs the need for stricter governance and oversight.
Teach your creators to use the same techniques that IT would use, like shared
datasets and dataflows. Having fewer duplicated datasets reduces maintenance,
improves consistency, and reduces risk.
Focus on providing mentoring, training, resources, and documentation (described
in the mentoring and user enablement article). The importance of these efforts
can't be overstated. Be prepared for skill levels of self-service content creators to
vary significantly. It's also common for a solution to deliver excellent business value
yet be built in such a way that it won't scale or perform well over time (as historic
data volumes increase). Having the COE available to help when these situations
arise is very valuable.
Provide guidance on the best way to use endorsements. The promoted
endorsement is for content produced by self-service creators. Consider reserving
use of the certified endorsement for enterprise BI content and managed self-
service BI content (discussed next).
Analyze the activity log to discover situations where the COE could proactively
contact self-service owners to offer helpful information. It's especially useful when
a suboptimal usage pattern is detected. For example, log activity could reveal
overuse of individual item sharing when an app or workspace roles may be a
better choice. The data from the activity log allows the COE to offer support and
advice to the business units. In turn, this information can help increase the quality
of solutions, while allowing the business to retain full ownership and control of
their content.
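To make the preceding point about activity log analysis more concrete, here's a minimal Python sketch (an illustration, not an official sample) that reads one day of events from the Power BI admin activity events REST API and counts per-item report sharing by user. It assumes an Azure AD access token with Power BI admin API permissions is already available in a hypothetical POWERBI_TOKEN environment variable; token acquisition, error handling, and verification of field names against your tenant's output are left to you.

import os
from collections import Counter

import requests

token = os.environ["POWERBI_TOKEN"]  # assumption: token with Power BI admin API permissions
headers = {"Authorization": f"Bearer {token}"}

# The admin activity events API accepts a start/end window within a single UTC day,
# and the date-time values are wrapped in single quotes.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-01T00:00:00Z'&endDateTime='2024-01-01T23:59:59Z'"
)

share_counts = Counter()
while url:
    response = requests.get(url, headers=headers, timeout=30)
    response.raise_for_status()
    payload = response.json()
    for event in payload.get("activityEventEntities", []):
        if event.get("Activity") == "ShareReport":  # per-item sharing of a report
            share_counts[event.get("UserId", "unknown")] += 1
    url = payload.get("continuationUri")  # follow pagination until no continuation is returned

# Heavy per-item sharing can suggest that an app or workspace roles would be a better fit.
for user, count in share_counts.most_common(10):
    print(f"{user}: {count} report shares")

A COE could run a script like this on a schedule and reach out to the users at the top of the list with guidance, rather than imposing restrictions.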
Managed self-service BI
Managed self-service BI is a blended approach. The data is owned and managed by a
centralized team (such as IT, enterprise BI, or the COE), while responsibility for reports
and dashboards belongs to creators and subject matter experts within the business
units. Managed self-service BI is frequently a good strategy for team BI and
departmental BI solutions.
This approach is often called discipline at the core and flexibility at the edge. It's because
the data architecture is maintained by a single team with an appropriate level of
discipline and rigor. Business units have the flexibility to create reports and dashboards
based on centralized data. This approach allows report creators to be far more efficient
because they can remain focused on delivering value from their data analysis and
visuals.
Teach users to separate model and report development. They can use live
connections to create reports based on existing datasets. When the dataset is
decoupled from the report, it promotes data reuse by many reports and many
authors. It also facilitates the separation of duties.
Use dataflows to centralize data preparation logic and to share commonly used
data tables—like date, customer, product, or sales—with many dataset creators.
Refine the dataflow as much as possible, using friendly column names and correct
data types to reduce the downstream effort required by dataset authors, who
consume the dataflow as a source. Dataflows are an effective way to reduce the
time involved with data preparation and improve data consistency across datasets.
The use of dataflows also reduces the number of data refreshes on source systems
and means that fewer users require direct access to source systems.
When self-service creators need to augment an existing dataset with departmental
data, educate them to use DirectQuery connections to Power BI datasets and Azure
Analysis Services. This feature allows for an ideal balance of self-service
enablement while taking advantage of the investment in data assets that are
centrally managed.
Use the certified endorsement for datasets and dataflows to help content creators
identify trustworthy sources of data.
Include consistent branding on all reports to indicate who produced the content
and who to contact for help. Branding is particularly helpful to distinguish content
that is produced by self-service creators. A small image or text label in the report
footer is valuable when the report is exported from the Power BI service.
Consider implementing separate workspaces for storing data and reports. This
approach allows for better clarity on who is responsible for content. It also allows
for more restrictive workspace role assignments. That way, report creators can
only publish content to their reporting workspace, while read and build dataset
permissions allow creators to create new reports with row-level security (RLS) in
effect, when applicable.
Use the Power BI REST APIs to compile an inventory of Power BI items. Analyze the
ratio of datasets to reports to evaluate the extent of dataset reuse.
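As an illustration of the preceding point, the following minimal Python sketch (not an official sample) calls the admin REST APIs to compare dataset and report counts and to list datasets that feed at most one report. It assumes the same hypothetical POWERBI_TOKEN environment variable holding an admin-scoped access token and, for brevity, that a single page of up to 5,000 items per call is sufficient; add $skip paging and error handling for larger tenants.

import os
from collections import Counter

import requests

token = os.environ["POWERBI_TOKEN"]  # assumption: token with Power BI admin API permissions
headers = {"Authorization": f"Bearer {token}"}
base = "https://api.powerbi.com/v1.0/myorg/admin"

datasets = requests.get(f"{base}/datasets?$top=5000", headers=headers, timeout=30).json()["value"]
reports = requests.get(f"{base}/reports?$top=5000", headers=headers, timeout=30).json()["value"]

# Count how many reports are built on each dataset.
reports_per_dataset = Counter(r["datasetId"] for r in reports if r.get("datasetId"))

print(f"Datasets: {len(datasets)}")
print(f"Reports: {len(reports)}")
print(f"Average reports per dataset: {len(reports) / max(len(datasets), 1):.2f}")

# Datasets that support at most one report are candidates for consolidation or broader reuse.
single_use = [d for d in datasets if reports_per_dataset.get(d["id"], 0) <= 1]
print(f"Datasets supporting at most one report: {len(single_use)}")

A low average, or a long list of single-use datasets, is the signal described above: an opportunity to promote shared datasets and reduce duplication.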
Enterprise BI
Enterprise BI is a centralized approach in which all content is owned and managed by a
centralized team. This team is usually IT, enterprise BI, or the COE.
Centralizing content management with a single team aligns with the organization's
data culture.
The organization has BI expertise to manage all the BI items end-to-end.
The content needs of consumers are well-defined, and there's little need to
customize or explore data beyond the reporting solution that's delivered.
Content ownership and direct access to data needs to be limited to a few people.
The data is highly sensitive or subject to regulatory requirements.
Implement a rigorous process for use of the certified endorsement for datasets,
reports, and apps. Not all enterprise BI content needs to be certified, but much of
it probably should be. Certified content should indicate that data quality has been
validated. Certified content should also follow change management rules, have
formal support, and be fully documented. Because certified content has passed
rigorous standards, the expectations for trustworthiness are higher.
Include consistent branding on enterprise BI reports to indicate who produced the
content, and who to contact for help. A small image or text label in the report
footer is valuable when the report is exported from the Power BI service.
If you use specific report branding to indicate enterprise BI content, be careful with
the save a copy functionality that would allow a user to download a copy of a
report and personalize it. Although this functionality is an excellent way to bridge
enterprise BI with managed self-service BI, it dilutes the value of the branding. A
more seamless solution is to provide a separate Power BI Desktop template file for
self-service authors. The template defines a starting point for report creation with a
live connection to an existing dataset, and it doesn't include branding. The
template file can be shared as a link within a Power BI app, or from the community
site.
Ownership transfers
Occasionally, the ownership of a particular solution may need to be transferred to
another team. An ownership transfer from a business unit to a centralized team can
happen when:
The COE should have well-documented procedures for identifying when a solution is a
candidate for ownership transfer. It's very helpful if help desk personnel know what to
look for as well. Having a customary pattern for self-service creators to build and grow a
solution, and hand it off in certain circumstances, is an indicator of a productive and
healthy data culture. A simple ownership transfer may be addressed during COE office
hours; a more complex transfer may warrant a small project managed by the COE.
Note
There's potential that the new owner will need to do some refactoring before
they're willing to take full ownership. Refactoring is most likely to occur with the
less visible aspects of data preparation, data modeling, and calculations. If there are
any manual steps or flat file sources, it's an ideal time to apply those
enhancements. The branding of reports and dashboards may also need to change,
for example, if there's a footer indicating report contact or a text label indicating
that the content is certified.
It's also possible for a centralized team to transfer ownership to a business unit. It could
happen when:
The team with domain knowledge is better equipped to own and manage the
content going forward.
The centralized team has created the solution for a business unit that doesn't have
the skills to create it from scratch, but it can maintain and extend the solution
going forward.
Tip
Don't forget to recognize and reward the work of the original creator, particularly if
ownership transfers are a common occurrence.
Maturity levels
The following maturity levels will help you assess the current state of your content
ownership and management.
Level State of Power BI content ownership and management
100: Initial
Self-service content creators own and manage content in an uncontrolled way, without a
specific strategy.
A high ratio of datasets to reports exists. When many datasets only support one report,
it indicates opportunities to improve data reusability, improve trustworthiness, and
reduce maintenance and the number of duplicate datasets.
200: Repeatable
A plan is in place for which content ownership and management strategy to use and in
which circumstances.
Initial steps are taken to improve the consistency and trustworthiness levels for
self-service BI efforts.
300: Defined
Guidance for the user community is available that includes expectations for self-service
versus enterprise content.
Roles and responsibilities are clear and well understood by everyone involved.
There's a plan in place for how to request and handle ownership transfers.
400: Capable
Managed self-service BI (and techniques for the reuse of data) are commonly used and
well understood.
500: Efficient
Proactive steps to communicate with users occur when any concerning activities are
detected in the activity log. Education and information are provided to make gradual
improvements or reduce risk.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about the scope
of content delivery.
Power BI adoption roadmap: Content
delivery scope
Article • 02/27/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
The four delivery scopes described in this article include personal BI, team BI,
departmental BI, and enterprise BI. To be clear, the scope of a delivered BI solution
does refer to the number of people who may view the solution, but its impact goes well
beyond that. The scope strongly influences best practices for content
distribution, sharing, security, and information protection. The scope has a direct
correlation to the level of governance (such as requirements for change management,
support, or documentation), the extent of mentoring and user enablement, and needs
for user support. It also influences user licensing decisions.
The related content ownership and management article makes similar points. Whereas
the focus of that article was on the content creator, the focus of this article is on the
target content usage. Both inter-related aspects need to be considered to arrive at
governance decisions and the Center of Excellence (COE) operating model.
Important
Not all data and solutions are equal. Be prepared to apply different levels of data
management and governance to different teams and various types of content.
Standardized rules are easier to maintain, however flexibility or customization is
often necessary to apply the appropriate level of oversight for particular
circumstances. Your executive sponsor can prove invaluable by reaching consensus
across stakeholder groups when difficult situations arise.
Personal BI: Personal BI solutions are, as the name implies, intended for use by the
creator. Sharing content with others isn't an objective. Therefore, personal BI has
the fewest number of target consumers.
Team BI: Focuses on collaborating and sharing content with a relatively small number of
colleagues who work closely together.
Departmental BI: Delivers content to a large number of consumers, who can
belong to a department or business unit.
Enterprise BI: Delivers content broadly across organizational boundaries to the
largest number of target consumers. Enterprise content is most often managed by
a centralized team and is subject to additional governance requirements.
Contrast the above four scopes of content delivery with the following diagram, which
has an inverse relationship with respect to the number of content creators.
The four scopes of content creators shown in the above diagram include:
Personal BI: Represents the largest number of creators because any user can work
with data using business-led self-service BI methods. Although managed self-
service BI methods can be used, it's less common with personal BI.
Team BI: Colleagues within a team collaborate and share with each other using
business-led self-service BI patterns. It has the next largest number of creators in
the organization. Managed self-service BI patterns may also begin to emerge as
skill levels advance.
Departmental BI: Involves a smaller population of creators. They're likely to be
considered power users who are using sophisticated tools to create sophisticated
solutions. Managed self-service BI practices are very common and highly
encouraged.
Enterprise BI: Involves the smallest number of content creators because it typically
includes only professional BI developers who work in the BI team, the COE, or in IT.
The content ownership and management article introduced the concepts of business-
led self-service BI, managed self-service BI, and enterprise BI. The most common
alignment between ownership and delivery scope is:
Business-led self-service BI ownership: Commonly deployed as personal and team
BI solutions.
Managed self-service BI ownership: Can be deployed as personal, team, or
departmental BI solutions.
Enterprise BI ownership: Deployed as enterprise BI-scoped solutions.
Some organizations also equate self-service content with community-based support. It's
the case when self-service content creators and owners are responsible for supporting
the content they publish. The user support article describes multiple informal and formal
levels for support.
Note
The term sharing can be interpreted two ways: It's often used in a general way
related to sharing content with colleagues, which could be implemented multiple
ways. It can also reference a specific feature in Power BI, which is a specific
implementation where a user or group is granted read-only access to a single item.
In this article, the term sharing is meant in a general way to describe sharing
content with colleagues. When the per-item sharing feature is intended, this article
will make a clear reference to that feature.
Personal BI
Personal BI is about enabling an individual to gain analytical value. It's also about
allowing them to more efficiently perform business tasks through the effective personal
use of data, information, and analytics. It could apply to any type of information worker
in the organization, not just data analysts and developers.
Sharing of content with others isn't the objective. Personal content can reside in Power
BI Desktop or in a personal workspace in the Power BI service. Usage of the personal
workspace is permitted with the free Power BI license.
The creator's primary intention is data exploration and analysis, rather than report
delivery.
The content is intended to be analyzed and consumed by one person: the creator.
The content may be an exploratory proof of concept that may, or may not, evolve
into a project.
Tip
See the adoption maturity levels article for information about how users progress
through the stages of user adoption. See the system oversight article for
information about usage tracking via the activity log.
Team BI
Team BI is focused on a team of people who work closely together, and who are tasked
with solving closely related problems using the same data. Collaborating and sharing
content with each other in a workspace is usually the primary objective. Due to this work
style, team members will typically each have a Power BI Pro or Power BI Premium Per
User (PPU) license.
Content is often shared among the team more informally as compared to departmental
or enterprise BI. For instance, the workspace is often sufficient for consuming content
within a small team. It doesn't require the formality of publishing the workspace
content as an app to distribute it. There isn't a specific number of users when team-based delivery
is considered too informal; each team can find the right number that works for them.
Content is created, managed, and viewed among a group of colleagues who work
closely together.
Collaboration and co-management of content is the highest priority.
Formal delivery of reports may occur by report viewers (especially for managers of
the team), but it's usually a secondary priority.
Reports aren't always highly sophisticated or attractive; functionality and accessing
the information is what matters most.
Ensure the Center of Excellence (COE) is prepared to support the efforts of self-
service creators publishing content for their team.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. It's tempting to start with one workspace per team, but that
may not be flexible enough to satisfy all needs.
See the techniques described for business-led self-service BI and managed self-
service BI in the content ownership and management article. They're highly
relevant techniques that help content creators create efficient and effective team BI
solutions.
Departmental BI
Content is delivered to members of a department or business unit. Content distribution
to a larger number of consumers is a priority for departmental BI.
Usually there's a much larger number of consumers who are content viewers (versus a
much smaller number of content creators). Therefore, a combination of Power BI Pro
licenses, Premium Per User licenses, and/or Premium capacity licenses may be used.
Ensure the COE is prepared to support the efforts of self-service creators. Creators
who publish content used throughout their department or business unit may
emerge as candidates to become champions, or they may become candidates to
join the COE as a satellite member.
Make purposeful decisions about how workspace management will be handled.
The workspace is a place to organize related content, a permissions boundary, and
the scope for an app. Several workspaces will likely be required to meet all the
needs of a large department or business unit.
Plan how Power BI apps will distribute content to the enterprise. An app can
provide a significantly better user experience for consuming content. In many
cases, content consumers can be granted permissions to view content via the app
only, reserving workspace permissions management for content creators and
reviewers only. The use of app audience groups allows you to mix and match
content and target audience in a flexible way.
Be clear about what data quality validations have occurred. As the importance and
criticality level grows, expectations for trustworthiness grow too.
Ensure that adequate training, mentoring, and documentation is available to
support content creators. Best practices for data preparation, data modeling, and
data presentation will result in better quality solutions.
Provide guidance on the best way to use the promoted endorsement, and when
the certified endorsement may be permitted for departmental BI solutions.
Ensure that the owner is identified for all departmental content. Clarity on
ownership is helpful, including who to contact with questions, feedback,
enhancement requests, or support requests. In the Power BI service, content
owners can set the contact list property for many types of items (like reports and
dashboards). The contact list is also used in security workflows. For example, when
a user is sent a URL to open an app but they don't have permission, they'll be
presented with an option to make a request for access.
Consider using deployment pipelines in conjunction with separate workspaces.
Deployment pipelines can support development, test, and production
environments, which provide more stability for consumers. (A minimal scripting sketch for triggering a deployment follows this list.)
Consider enforcing the use of sensitivity labels to implement information
protection on all content.
Include consistent branding on reports to align with departmental colors and
styling. It can also indicate who produced the content. For more information, see
the Content ownership and management article. A small image or text label in the
report footer is valuable when the report is exported from the Power BI service. A
standard Power BI Desktop template file can encourage and simplify the consistent
use of branding. For more information, see the Mentoring and user enablement
article.
See the techniques described for business-led self-service BI and managed self-
service BI in the content ownership and management article. They're highly
relevant techniques that help content creators create efficient and effective
departmental BI solutions.
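Deployments between pipeline stages can also be automated. The following is a minimal sketch, assuming the Pipelines DeployAll operation of the Power BI REST API and an Azure AD access token acquired separately; the pipeline ID and token values shown are placeholders, not real identifiers.

```python
import requests

# Placeholders: supply a real deployment pipeline ID and an Azure AD access token
# that has the appropriate Power BI API permissions.
PIPELINE_ID = "00000000-0000-0000-0000-000000000000"
ACCESS_TOKEN = "<access-token>"

def deploy_all(pipeline_id: str, source_stage_order: int) -> dict:
    """Deploy all supported items from one pipeline stage to the next.

    source_stage_order 0 deploys Development -> Test; 1 deploys Test -> Production.
    """
    url = f"https://api.powerbi.com/v1.0/myorg/pipelines/{pipeline_id}/deployAll"
    body = {
        "sourceStageOrder": source_stage_order,
        "options": {
            # Allow new items to be created in the target stage and
            # existing items to be overwritten during the deployment.
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    }
    response = requests.post(
        url,
        json=body,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()
    # The service returns details of the deployment operation, useful for polling status.
    return response.json() if response.content else {}

if __name__ == "__main__":
    operation = deploy_all(PIPELINE_ID, source_stage_order=0)
    print("Deployment operation started:", operation.get("id"))
```

A script like this could run as part of a scheduled or approval-gated process, so that promotion from test to production happens consistently rather than ad hoc.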
Enterprise BI
Enterprise BI content is typically managed by a centralized team and is subject to
additional governance requirements. Content is delivered broadly across organizational
boundaries.
Checklist - Considerations and key actions you can take to strengthen your approach to
content delivery.
" Align goals for content delivery: Ensure that guidelines, documentation, and other
resources align with the strategic goals defined for Power BI adoption.
" Clarify the scopes for content delivery in your organization: Determine who each
scope applies to, and how each scope aligns with governance decisions. Ensure that
decisions and guidelines are consistent with how content ownership and
management is handled.
" Consider exceptions: Be prepared for how to handle situations when a smaller
team wants to publish content for an enterprise-wide audience.
Will it require the content be owned and managed by a centralized team? For
more information, see the Content ownership and management article, which
describes an inter-related concept with content delivery scope.
Will there be an approval process? Governance can become more complicated
when the content delivery scope is broader than the owner of the content. For
example, when an app that's owned by a divisional sales team is distributed to
the entire organization.
" Create helpful documentation: Ensure that you have sufficient training
documentation and support so that your content creators understand when it's
appropriate to use workspaces, apps, or per-item sharing (direct access or link).
" Create a licensing strategy: Ensure that you have a specific strategy in place to
handle user licensing considerations for Power BI Pro, Premium Per User, and
Premium capacity. Create a process for how workspaces may be assigned each
license type, and the prerequisites required for the type of content that may be
assigned to Premium.
Maturity levels
The following maturity levels will help you assess the current state of your content
delivery.
Level            State of Power BI content delivery
100: Initial     Content is published for consumers by self-service creators in an uncontrolled way, without a specific strategy.
200: Repeatable  Pockets of good practices exist. However, good practices are overly dependent on the knowledge, skills, and habits of the content creator.
300: Defined     Clear guidelines are defined and communicated to describe what can and can't occur within each delivery scope. These guidelines are followed by some, but not all, groups across the organization.
400: Capable     Criteria are defined to align governance requirements for self-service versus enterprise content. Guidelines for content delivery scope are followed by most, or all, groups across the organization. Changes are announced and follow a communication plan. Content creators are aware of the downstream effects on their content. Consumers are aware of when reports and apps are changed.
500: Efficient   Proactive steps to communicate with users occur when any concerning activities are detected in the activity log. Education and information are provided to make gradual improvements or reduce risk. The business value that's achieved for deployed solutions is regularly evaluated.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about the Center
of Excellence (COE).
Power BI adoption roadmap: Center of
Excellence
Article • 02/27/2023
7 Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
A COE might also be known as business intelligence (BI) competency center, capability
center, or a center of expertise. Some organizations use the term squad. Many
organizations perform the COE responsibilities within their BI team or analytics team.
) Important
One of the most powerful aspects of a COE is the cross-departmental insight into
how Power BI is used by the organization. This insight can reveal which practices
work well and which don't, which can facilitate a bottom-up approach to governance.
A primary goal of the COE is to learn which practices work well, share that
knowledge more broadly, and replicate best practices across the organization.
Mentoring the internal Power BI community. For more information, see the
Community of practice article.
Producing, curating, and promoting training materials. For more information, see
the Mentoring and user enablement article.
Creating documentation and resources to encourage consistent use of standards
and best practices. For more information, see the Mentoring and user enablement
article.
Applying, communicating, and assisting with governance guidelines. For more
information, see the Governance article.
Handling and assisting with system oversight and administration. For more
information, see the System oversight article.
Responding to user support issues escalated from the help desk. For more
information, see the User support article.
Developing solutions and/or proofs of concept.
Establishing and maintaining the BI platform and data architecture.
Staffing a COE
People who are good candidates as COE members tend to be those who:
Tip
If you have Power BI creators in your organization who constantly push the
boundaries of what can be done, they might be a great candidate to become a
recognized champion, or perhaps even a member of the COE.
When recruiting for the COE, it's important to have a mix of complementary analytical
skills, technical skills, and business skills.
Role           Description
COE leader     Manages the day-to-day operations of the COE. Interacts with the executive sponsor and other organizational teams, such as the data governance board, as necessary. For details of additional roles and responsibilities, see the Governance article.
Coach          Coaches and educates others on BI skills via office hours (community engagement), best practices reviews, or co-development projects. Oversees and participates in the discussion channel of the internal community. Interacts with, and supports, the champions network.
Trainer        Develops, curates, and delivers internal training materials, documentation, and resources.
Data analyst   Domain-specific subject matter expert. Acts as a liaison between the COE and the business unit. Content creator for the business unit. Assists with content certification. Works on co-development projects and proofs of concept.
Data modeler   Creates and manages shared datasets and dataflows to support self-service content creators.
Data engineer  Plans Power BI deployment and architecture, including integration with Azure services and other data platforms. Publishes data assets which are utilized broadly across the organization.
User support   Assists with the resolution of data discrepancies and escalated help desk support issues.
As mentioned previously, the scope of responsibilities for a COE can vary significantly
between organizations. Therefore, the roles found for COE members can vary too.
Structuring a COE
The selected COE structure can vary among organizations. It's also possible for multiple
structures to exist inside of a single large organization. That's particularly true when
there are subsidiaries or when acquisitions have occurred.
7 Note
The following terms may differ from those defined for your organization, particularly
the meaning of federated, which tends to have many different IT-related meanings.
Centralized COE
A centralized COE comprises a single shared services team.
Pros:
There's a single point of accountability for a single team that manages standards,
best practices, and delivery end-to-end.
The COE is one group from an organizational chart perspective.
It's easy to start with this approach and then evolve to the unified or federated
model over time.
Cons:
Unified COE
A unified COE is a single, centralized, shared services team that has been expanded to
include embedded team members. The embedded team members are dedicated to
supporting a specific functional area or business unit.
Pros:
There's a single point of accountability for a single team that includes cross-
functional involvement from the embedded COE team members. The embedded
COE team members are assigned to various areas of the business.
The COE is one group from an organizational chart perspective.
The COE understands the needs of business units more deeply due to dedicated
members with domain expertise.
Cons:
The embedded COE team members, who are dedicated to a specific business unit,
have a different organizational chart responsibility than the people they serve
directly within the business unit. It may potentially lead to complications,
differences in priorities, or necessitate the involvement of the executive sponsor.
Preferably, the executive sponsor has a scope of authority that includes the COE
and all involved business units to help resolve conflicts.
Federated COE
A federated COE comprises a shared services team plus satellite members from each
functional area or major business unit. A federated team works in coordination, even
though its members reside in different business units. Typically, satellite members are
primarily focused on development activities to support their business unit while the
shared services personnel support the entire community.
Pros:
Cons:
Since core and satellite members span organizational boundaries, the federated
COE approach requires strong leadership, excellent communication, robust project
management, and ultra-clear expectations.
There's a higher risk of encountering competing priorities due to the federated
structure.
This approach typically involves part-time people and/or dotted line organizational
chart accountability that can introduce competing time pressures.
Tip
Decentralized COE
Decentralized COEs are independently managed by business units.
Pros:
A specialized data culture exists that's focused on the business unit, making it
easier to learn quickly and adapt.
Policies and practices are tailored to each business unit.
Agility, flexibility, and priorities are focused on the individual business unit.
Cons:
There's a risk that decentralized COEs operate in isolation. As a result, they might
not share best practices and lessons learned outside of their business unit.
Collaboration with a centralized team may be informal and/or inconsistent.
Inconsistent policies are created and applied across business units.
It's difficult to scale a decentralized model.
There's potential rework to bring one or more decentralized COEs in alignment
with organizational-wide policies.
Larger business units with significant funding may have more resources available
to them, which may not serve cost optimization goals from an organizational-wide
perspective.
) Important
Cost center.
Profit center with project budget(s).
A combination of cost center and profit center.
When the COE operates as a cost center, it absorbs the operating costs. Generally, it
involves an approved annual budget. Sometimes this is called a push engagement
model.
When the COE operates as a profit center (for at least part of its budget), it could accept
projects throughout the year based on funding from other business units. Sometimes
this is called a pull engagement model.
Funding is important because it impacts the way the COE communicates and engages
with the internal community. As the COE experiences more and more successes, they
may receive more requests from business units for help. It's especially the case as
awareness grows throughout the organization.
Tip
The choice of funding model can determine how the COE actively grows its
influence and ability to help. The funding model can also have a big impact on
where authority resides and how decision-making works. Further, it impacts the
types of services a COE can offer, such as co-development projects and/or best
practices reviews. For more information, see the Mentoring and user enablement
article.
Some organizations cover the COE operating costs with chargebacks to business units
based on the usage goals of Power BI. For a Power BI shared capacity, this could be
based on number of active users. For Premium capacity, chargebacks could be allocated
based on which business units are using the capacity. Ideally, chargebacks are directly
correlated to the business value gained.
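As a rough illustration of an active-user chargeback, the sketch below totals distinct users per business unit from an exported copy of the activity log and applies a flat per-user rate. The file name, column names (UserId, BusinessUnit), and the rate are hypothetical; substitute whatever your own activity log export and chargeback agreement actually use.

```python
import csv
from collections import defaultdict

# Hypothetical inputs: an activity log export already joined with a
# user-to-business-unit mapping, plus an agreed monthly per-active-user rate.
ACTIVITY_EXPORT = "activity_log_2023_06.csv"   # columns: UserId, BusinessUnit, ActivityDate
RATE_PER_ACTIVE_USER = 12.50                   # example chargeback rate (currency units)

def monthly_chargebacks(path: str, rate: float) -> dict[str, float]:
    """Count distinct active users per business unit and apply a flat per-user rate."""
    active_users: dict[str, set[str]] = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            active_users[row["BusinessUnit"]].add(row["UserId"])
    return {unit: len(users) * rate for unit, users in active_users.items()}

if __name__ == "__main__":
    for unit, amount in sorted(monthly_chargebacks(ACTIVITY_EXPORT, RATE_PER_ACTIVE_USER).items()):
        print(f"{unit}: {amount:.2f}")
```

A per-capacity allocation would follow the same pattern, grouping by the capacity or workspace assigned to each business unit instead of by user.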
Checklist - Considerations and key actions you can take to establish or improve your
Power BI COE.
" Define the scope of responsibilities for the COE: Ensure that you're clear on what
activities the COE can support. Once the scope of responsibilities is known, identify
the skills and competencies required to fulfill those responsibilities.
" Identify gaps in the ability to execute: Analyze whether the COE has the required
systems and infrastructure in place to meet its goals and scope of responsibilities.
" Determine the best COE structure: Identify which COE structure is most
appropriate (centralized, unified, federated, or decentralized). Verify that staffing,
roles and responsibilities, and appropriate organizational chart relationships (HR
reporting) are in place.
" Plan for future growth: If you're starting out with a centralized or decentralized
COE, consider how you will scale the COE over time by using the unified or
federated approach. Plan for any actions that you can take now that'll facilitate
future growth.
" Identify customers: Identify the internal customers, and any external customers, to
be served by the COE. Decide how the COE will generally engage with those
customers, whether it's a push model, pull model, or both models.
" Verify the funding model for the COE: Decide whether the COE is purely a cost
center with an operating budget, whether it will operate partially as a profit center,
and/or whether chargebacks to other business units will be required.
" Create a communication plan: Create you communications strategy to educate the
Power BI community about the services the COE offers, and how to engage with the
COE.
" Create goals and metrics: Determine how you'll measure effectiveness for the COE.
Create KPIs (key performance indicators) or OKRs (objectives and key results) to
validate that the COE consistently provides value to the user community.
Maturity levels
The following maturity levels will help you assess the current state of your COE.
Level            State of Power BI Center of Excellence
100: Initial     One or more COEs exist, or the activities are performed within the BI team or IT. There's no clarity on specific goals or expectations for responsibilities. Requests for assistance from the COE are handled in an unplanned manner.
200: Repeatable  The COE is in place with a specific charter to mentor, guide, and educate self-service users. The COE seeks to maximize benefits of self-service BI while reducing the risks. The goals, scope of responsibilities, staffing, structure, and funding model are established for the COE.
300: Defined     The COE operates with active involvement from all business units in a unified or federated mode.
400: Capable     The goals of the COE align with organizational goals, and they are reassessed regularly. The COE is well-known throughout the organization, and consistently proves its value to the internal user community.
500: Efficient   Regular reviews of KPIs or OKRs evaluate COE effectiveness in a measurable way. Agility and implementing continual improvements from lessons learned (including scaling out methods that work) are top priorities for the COE.
Next steps
In the next article in the Power BI adoption roadmap series, learn about implementing
governance guidelines, policies, and processes.
Also, consider reading about Microsoft's journey and experience with driving a data
culture. This article describes the importance of discipline at the core and flexibility at the
edge. It also shares Microsoft's views and experiences about the importance of
establishing a COE.
Power BI adoption roadmap:
Governance
Article • 02/27/2023
7 Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
Data governance is a broad and complex topic. This article introduces key concepts and
considerations. It identifies important actions to take when adopting Power BI, but it's
not a comprehensive reference for data governance.
As defined by the Data Governance Institute , data governance is "a system of decision
rights and accountabilities for information-related processes, executed according to
agreed-upon models which describe who can take what actions, with what information,
and when, under what circumstances, using what methods."
The term data governance is a misnomer. The primary focus for governance isn't on the
data itself. The focus is on governing what users do with the data. Put another way: The
true focus is on governing users' behavior to ensure organizational data is well-
managed.
When focused on self-service business intelligence, the primary goals of governance are
to achieve the proper balance of:
The optimal balance between control and empowerment will differ between
organizations. It's also likely to differ among different business units within an
organization. With a platform like Power BI, you'll be most successful when you put as
much emphasis on user empowerment as on clarifying its practical usage within
established guardrails.
Tip
Governance strategy
When considering data governance in any organization, the best place to start is by
defining a governance strategy. By focusing first on the strategic goals for data
governance, all detailed decisions when implementing governance policies and
processes can be informed by the strategy. In turn, the governance strategy will be
defined by the organization's data culture.
Empowering users throughout the organization to use data and make decisions,
within the defined boundaries.
Improving the user experience by providing clear and transparent guidance (with
minimal friction) on what actions are permitted, why, and how.
Ensuring that the data usage is appropriate for the needs of the business.
Ensuring that content ownership and stewardship responsibilities are clear. For
more information, see the Content ownership and management article.
Enhancing the consistency and standardization of working with data across
organizational boundaries.
Reducing risk of data leakage and misuse of data. For more information, see the
information protection and data loss prevention series of articles.
Meeting regulatory, industry, and internal requirements for the proper use of data.
Tip
A well-executed data governance strategy makes it easier for more users to work
with data. When governance is approached from the perspective of user
empowerment, users are more likely to follow the documented processes.
Accordingly, users become trusted partners too.
Governance success factors
Governance isn't well-received when it's enacted with top-down mandates that are
focused more on control than empowerment. Governing Power BI is most successful
when:
Roll out Power BI first, then introduce governance: Power BI is made widely available
to users in the organization as a new self-service BI tool. Then, at some time in the
future, a governance effort begins. This method prioritizes agility.
Full governance planning first, then roll out Power BI: Extensive governance planning
occurs prior to permitting users to begin using Power BI. This method prioritizes
control and stability.
Choose method 1 when Power BI is already used for self-service scenarios, and you're
ready to start working in a more efficient manner.
Choose method 3 when you want to have a balance of control and agility. This balanced
approach is the best choice for most organizations and most scenarios.
Pros:
Cons:
Pros:
Cons:
Pros:
Cons:
For more information about up-front planning, see the Preparing to migrate to Power BI
article.
Governance challenges
If your organization has implemented Power BI without a governance approach or
strategic direction (as described above by method 1), there could be numerous
challenges requiring attention. Depending on the approach you've taken and your
current state, some of the following challenges may be applicable to your organization.
Strategy challenges
Lack of a cohesive data governance strategy that aligns with the business strategy
Lack of executive support for governing data as a strategic asset
Insufficient adoption planning for advancing adoption and the maturity level of BI
and analytics
People challenges
Lack of aligned priorities between centralized teams and business units
Lack of identified champions with sufficient expertise and enthusiasm throughout
the business units to advance organizational adoption objectives
Lack of awareness of self-service best practices
Resistance to following newly introduced governance guidelines and policies
Duplicate effort spent across business units
Lack of clear accountability, roles, and responsibilities
Process challenges
Lack of clearly defined processes resulting in chaos and inconsistencies
Lack of standardization or repeatability
Insufficient ability to communicate and share lessons learned
Lack of documentation and over-reliance on tribal knowledge
Inability to comply with security and privacy requirements
Tip
Governance planning
Some organizations have implemented Power BI without a governance approach or
clear strategic direction (as described above by method 1). In this case, the effort to
begin governance planning can be daunting.
If a formal governance body doesn't currently exist in your organization, then the focus
of your governance planning and implementation efforts will be broader. If, however,
there's an existing data governance board in the organization, then your focus is
primarily to integrate with existing practices and customize them to accommodate the
objectives for self-service BI and enterprise BI.
) Important
Some potential governance planning activities and outputs that you may find valuable
are described next.
Strategy
Key activities:
Key output:
People
Key activities:
Key output:
Analyze immediate pain points, issues, risks, and areas to improve the user
experience
Prioritize data policies to be addressed by order of importance
Identify existing processes in place that work well and can be formalized
Determine how new data policies will be socialized
Decide to what extent data policies may differ or be customized for different
groups
Key output:
Process for how data policies and documentation will be defined, approved,
communicated, and maintained
Plan for requesting valid exceptions and departures from documented policies
Project management
The implementation of the governance program should be planned and managed as a
series of projects.
Key activities:
Key output:
) Important
The scope of activities listed above that will be useful to take on will vary
considerably between organizations. If your organization doesn't have existing
processes and workflows for creating these types of outputs, refer to the guidance
found in the Roadmap conclusion article for some helpful resources.
Governance policies
Decision criteria
All governance decisions should be in alignment with the established goals for
organizational adoption. Once the strategy is clear, more tactical governance decisions
will need to be made which affect the day-to-day activities of the self-service user
community. These types of tactical decisions correlate directly to the data policies that
get created.
Who owns and manages the BI content? The Content ownership and
management article introduced three types of strategies: business-led self-service
BI, managed self-service BI, and enterprise BI. Who owns and manages the content
has a significant impact on governance requirements.
What is the scope for delivery of the BI content? The Content delivery scope
article introduced four scopes for delivery of content: personal BI, team BI,
departmental BI, and enterprise BI. The scope of delivery has a considerable impact
on governance requirements.
What is the data subject area? The data itself, including its sensitivity level, is an
important factor. Some data domains inherently require tighter controls. For
instance, personally identifiable information (PII), or data subject to regulations,
should be subject to stricter governance requirements than less sensitive data.
Is the data, and/or the BI solution, considered critical? If you can't make an
informed decision easily without this data, you're dealing with critical data
elements. Certain reports and apps may be deemed critical because they meet a
set of predefined criteria. For instance, the content is delivered to executives.
Predefined criteria for what's considered critical helps everyone have clear
expectations. Critical data is usually subject to stricter governance requirements.
Tip
Different combinations of the above four criteria will result in different governance
requirements for Power BI content.
The following list includes items that you may choose to prioritize when introducing
governance for Power BI.
If you don't make governance decisions and communicate them well, users will use their
own judgment for how things should work—and that often results in inconsistent
approaches to common tasks.
Although not every governance decision needs to be made upfront, it's important that
you identify the areas of greatest risk in your organization. Then, incrementally
implement governance policies and processes that will deliver the most impact.
Data policies
A data policy is a document that defines what users can and can't do. You may call it
something different, but the goal remains the same: when decisions—such as those
discussed in the previous section—are made, they're documented for use and reference
by the community of users.
A data policy should be as short as possible. That way, it's easy for people to understand
what is being asked of them.
7 Note
Here are three common data policy examples you may choose to prioritize:
Data ownership policy: Specifies when an owner is required for a dataset, and what the data owner's responsibilities include, such as: supporting colleagues who view the content, maintaining appropriate confidentiality and security, and ensuring compliance.
Data certification (endorsement) policy: Specifies the process that is followed to certify a dataset. Requirements may include activities such as: data accuracy validation, data source and lineage review, technical review of the data model, security review, and documentation review.
Data classification and protection policy: Specifies activities that are allowed and not allowed per classification (sensitivity level). It should specify activities such as: allowed sharing with external users (with or without NDA), encryption requirements, and ability to download the dataset. Sometimes, it's also called a data handling policy or a data usage policy. For more information, see the Information protection for Power BI article.
U Caution
Having a lot of documentation can lead to a false sense that everything is under
control, which can lead to complacency. The level of engagement that the COE has
with the user community is one way to improve the chances that governance
guidelines and policies are consistently followed. Auditing and monitoring activities
are also important.
Scope of policies
Governance decisions will rarely be one-size-fits-all across the entire organization. When
practical, it's wise to start with standardized policies, and then implement exceptions as
needed. Having a clearly defined strategy for how policies will be handled for
centralized and decentralized teams will make it much easier to determine how to
handle exceptions.
Inflexible
Less autonomy and empowerment
Tip
Finding the right balance of standardization and customization for supporting self-
service BI across the organization can be challenging. However, by starting with
organizational policies and mindfully watching for exceptions, you can make
meaningful progress quickly.
) Important
Regardless of how the governance body is structured, it's important that there's a
person or group with sufficient influence over data governance decisions. This
person should have authority to enforce those decisions across organizational
boundaries.
Level Description
Tactical - Supporting teams: Level 2 includes several groups that support the efforts of
the users in the business units. Supporting teams include the COE, enterprise BI, the data
governance office, as well as other ancillary teams. Ancillary teams can include IT, security,
HR, and legal. A change control board is included here as well.
Tactical - Audit and compliance: Level 3 includes internal audit, risk management, and
compliance teams. These teams provide guidance to levels 1 and 2. They also provide
enforcement when necessary.
Strategic - Executive sponsor and steering committee: The top level includes the
executive-level oversight of strategy and priorities. This level handles any escalated issues
that couldn't be solved at lower levels. Therefore, it's important to have people with
sufficient authority to be able to make decisions when necessary.
) Important
Chief Data Officer or Chief Analytics Officer: Defines the strategy for use of data as an enterprise asset. Oversees enterprise-wide governance guidelines and policies.
Data governance board: Steering committee with members from each business unit who, as domain owners, are empowered to make enterprise governance decisions. They make decisions on behalf of the business unit and in the best interest of the organization. Provides approvals, decisions, priorities, and direction to the enterprise data governance team and working committees.
Data governance working committees: Temporary or permanent teams that focus on individual governance topics, such as security or data quality.
Change management board: Coordinates the requirements, processes, approvals, and scheduling for release management processes with the objective of reducing risk and minimizing the impact of changes to critical applications.
Project management office: Manages individual governance projects and the ongoing data governance program.
Power BI executive sponsor: Promotes adoption and the successful use of Power BI. Actively ensures that Power BI decisions are consistently aligned with business objectives, guiding principles, and policies across organizational boundaries. For more information, see the Executive sponsorship article.
Center of Excellence: Mentors the community of creators and consumers to promote the effective use of Power BI for decision-making. Provides cross-departmental coordination of Power BI activities to improve practices, increase consistency, and reduce inefficiencies. For more information, see the Center of Excellence article.
Power BI champions: A subset of content creators found within the business units who help advance the adoption of Power BI. They contribute to data culture growth by advocating the use of best practices and actively assisting colleagues. For more information, see the Community of practice article.
Risk management: Reviews and assesses data sharing and security risks. Defines ethical data policies and standards. Communicates regulatory and legal requirements.
Data steward: Collaborates with the governance committee and/or COE to ensure that organizational data has acceptable data quality levels.
All BI creators and consumers: Adhere to policies for ensuring that data is secure, protected, and well-managed as an organizational asset.
Tip
Name a backup for each person in key roles, for example, members of the data
governance board. In their absence, the backup person can attend meetings and
make time-sensitive decisions when necessary.
Checklist - Considerations and key actions you can take to establish or strengthen your
governance initiatives.
" Align goals and guiding principles: Confirm that the high-level goals and guiding
principles of the data culture goals are clearly documented and communicated.
Ensure that alignment exists for any new governance guidelines or policies.
" Understand what's currently happening: Ensure that you have a deep
understanding of how Power BI is currently used for self-service BI and enterprise
BI. Document opportunities for improvement. Also, document strengths and good
practices that would be helpful to scale out more broadly.
" Prioritize new governance guidelines and policies: For prioritizing which new
guidelines or policies to create, select an important pain point, high priority need,
or known risk for a data domain. It should have significant benefit and can be
achieved with a feasible level of effort. When you implement your first governance
guidelines, choose something users are likely to support because the change is low
impact, or because they are sufficiently motivated to make a change.
" Create a schedule to review policies: Determine the cadence for how often data
policies are reevaluated. Reassess and adjust when needs change.
" Decide how to handle exceptions: Determine how conflicts, issues, and requests
for exceptions to documented policies will be handled.
" Understand existing data assets: Confirm that you understand what critical data
assets exist. Create an inventory of ownership and lineage, if necessary. Keep in
mind that you can't govern what you don't know about.
" Verify executive sponsorship: Confirm that you have support and sufficient
attention from your executive sponsor, as well as from business unit leaders.
" Prepare an action plan: Include the following key items:
Initial priorities: Select one data domain or business unit at a time.
Timeline: Work in iterations long enough to accomplish meaningful progress, yet
short enough to periodically adjust.
Quick wins: Focus on tangible, tactical, and incremental progress.
Success metrics: Create measurable metrics to evaluate progress.
Maturity levels
The following maturity levels will help you assess the current state of your governance
initiatives.
Level            State of Power BI governance
100: Initial     Due to a lack of governance planning, the good data management and informal governance practices that are occurring are overly reliant on the judgment and experience of individuals.
200: Repeatable  Some areas of the organization have made a purposeful effort to standardize, improve, and document their data management and governance practices.
300: Defined     A complete governance strategy with focus, objectives, and priorities is enacted and broadly communicated. Specific governance guidelines and policies are implemented for the top few priorities (pain points or opportunities). They're actively and consistently followed by users.
400: Capable     All Power BI governance priorities align with organizational goals and business objectives. Goals are reassessed regularly. It's clear where Power BI fits into the overall BI strategy for the organization. Power BI activity log and API data is actively analyzed to monitor and audit Power BI activities. Proactive action is taken based on the data.
500: Efficient   Regular reviews of KPIs or OKRs evaluate measurable governance goals. Iterative, continual progress is a priority. Power BI activity log and API data is actively used to inform and improve adoption and governance efforts.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about mentoring
and user enablement.
Power BI adoption roadmap: Mentoring
and user enablement
Article • 02/27/2023
7 Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
A critical objective for adoption efforts is to enable users to accomplish as much as they
can within the requisite guardrails established by governance guidelines and policies.
For this reason, the act of mentoring users is one of the most important responsibilities
of the Center of Excellence (COE), and it has a direct influence on how user adoption
occurs. For more information about user adoption, see the Power BI adoption maturity
levels article.
Skills mentoring
Mentoring and helping users in the Power BI community become more effective can
take on various forms, such as:
Office hours
Co-development projects
Best practices reviews
Extended support
Office hours
Office hours are a form of ongoing community engagement managed by the COE. As
the name implies, office hours are times of regularly scheduled availability where
members of the community can engage with experts from the COE to receive assistance
with minimal process overhead. Since office hours are group-based, Power BI
champions and other members of the community can also pitch in to help solve an
issue if a topic is in their area of expertise.
Office hours are a very popular and productive activity in many organizations. Some
organizations call them drop-in hours or even a fun name such as Power Hour. The
primary goal is usually to get questions answered, solve problems, and remove blockers.
Office hours can also be used as a platform for the user community to share ideas,
suggestions, and even complaints.
The COE publishes the times for regular office hours when one or more COE members
are available. Ideally, office hours are held on a regular and frequent basis. For instance,
it could be every Tuesday and Thursday. Consider offering different time slots or
rotating times if you have a global workforce.
Tip
One option is to set specific office hours each week. However, people may or may
not show up, so that can end up being inefficient. Alternatively, consider leveraging
Microsoft Bookings to schedule office hours. It shows the blocks of time when
each COE expert is available, with Outlook integration ensuring availability is up to
date.
Content creators and the COE actively collaborate to answer questions and solve
problems together.
Real work is accomplished while learning and problem solving.
Others may observe, learn, and participate.
Individual groups can head to a breakout room to solve a specific problem.
They're a great way for the COE to identify champions or people with specific skills
that the COE didn't previously know about.
The COE can learn what people throughout the organization are struggling with. It
helps inform whether additional resources, documentation, or training might be
required.
Tip
It's common for some tough issues to come up during office hours that cannot be
solved quickly, such as getting a complex DAX calculation to work. Set clear
expectations for what's in scope for office hours, and whether there's any commitment to
follow up.
Co-development projects
One way the COE can provide mentoring services is during a co-development project. A
co-development project is a form of assistance offered by the COE where a user or
business unit takes advantage of the technical expertise of the COE to solve business
problems with data. Co-development involves stakeholders from the business unit and
the COE working in partnership to build a high-quality self-service BI solution that the
business stakeholders couldn't deliver independently.
The goal of co-development is to help the business unit develop expertise over time
while also delivering value. For example, the sales team has a pressing need to develop
a new set of commission reports, but the sales team doesn't yet have the knowledge to
complete it on their own.
A co-development project forms a partnership between the business unit and the COE.
In this arrangement, the business unit is fully invested, deeply involved, and assumes
ownership for the project.
Time involvement from the COE reduces over time until the business unit gains expertise
and becomes self-reliant. The level of active involvement from the COE and the business
unit gradually shifts over the duration of the project.
Ideally, the period for the gradual reduction in involvement is identified up-front in the
project. This way, both the business unit and the COE can sufficiently plan the timeline
and staffing.
Co-development projects can deliver significant short- and long-term benefits. In the
short term, the involvement from the COE can often result in a better-designed and
better-performing solution that follows best practices and aligns with organizational
standards. In the long term, co-development helps increase the knowledge and
capabilities of the business stakeholder, making them more self-sufficient, and more
confident to deliver quality self-service BI solutions in the future.
) Important
Essentially, a co-development project helps less experienced users learn the right
way to do things. It reduces risk that refactoring might be needed later, and it
increases the ability for a solution to scale and grow over time.
Best practices reviews
During a review, an expert from the COE evaluates self-service Power BI content
developed by a member of the community and identifies areas of risk or opportunities
for improvement. The following bullet list presents some examples of when a best
practices review could be beneficial:
The sales team has an app that they intend to distribute to thousands of users
throughout the organization. Since the app represents high priority content
distributed to a large audience, they'd like to have it certified. The standard
process to certify content includes a best practices review.
The finance team would like to assign a workspace to Premium capacity. A review
of the workspace content is required to ensure sound development practices were
followed. This type of review is common when the capacity is shared among
multiple business units. (A review may not be required when the capacity is
assigned to only one business unit.)
The operations team is creating a new solution they expect to be widely used. They
would like to request a best practices review before it goes into user acceptance
testing (UAT), or before a request is submitted to the change management board.
A best practices review is most often focused on the dataset design, though the review
can encompass all types of Power BI items (dataflows, datasets, reports, or apps).
Before content is deployed to the Power BI service, a best practices review may verify
that:
Data sources used are appropriate and query folding is invoked whenever possible.
Connectivity mode and storage mode choices (for example, import, live
connection, DirectQuery, or composite model frameworks) are appropriate.
Location for data sources, like flat files, and original Power BI Desktop files are
suitable (preferably stored in a backed-up location with versioning and appropriate
security, such as Teams files or a SharePoint shared library).
Data preparation steps are clean, orderly, and efficient.
Datasets are well-designed, clean, and understandable (a star schema design is
highly recommended).
Relationships are configured correctly.
DAX calculations use efficient coding practices (particularly if the data model is
large).
The dataset size is within a reasonable limit and data reduction techniques are
applied.
Row-level security (RLS) appropriately enforces data permissions.
Data is accurate and has been validated against the authoritative source(s).
Approved common definitions and terminology are used.
Good data visualization practices are followed, including designing for
accessibility.
Once the content has been deployed to the Power BI service, the best practices review
isn't necessarily complete yet. Completing the remainder of the review may also include
additional checks of the deployed content in the service.
Extended support
From time to time, the COE may get involved with complex issues escalated from the
help desk. For more information, see the User support article.
7 Note
Offering mentoring services might be a culture shift for your organization. Your
reaction might be that users don't usually ask for help with a tool like Excel, so why
would they with Power BI? The answer lies in the fact that Power BI is an
extraordinarily powerful tool, providing data preparation and data modeling
capabilities in addition to data visualization. The complexity of the tool inherently
means that there's a significant learning curve to develop mastery. Having the
ability to aid and enable users can significantly improve their skills and increase the
quality of their solutions—it reduces risks too.
Centralized portal
A single centralized portal, or hub, is where the user community can find:
Tip
In general, only 10%-20% of your community will go out of their way to actively
seek out training and educational information. These types of people might
naturally evolve to become your Power BI champions. Everyone else is usually just
trying to get the job done as quickly as possible because their time, focus, and
energy are needed elsewhere. Therefore, it's important to make information easy
for your community users to find.
The goal is to consistently direct users in the community to the centralized portal to find
information. The corresponding obligation for the COE is to ensure that the information
users need is available in the centralized portal. Keeping the portal updated requires
discipline when everyone is busy.
) Important
It takes time for community users to think of the centralized portal as their natural first
stop for finding information. It takes consistent redirection to the portal to change
habits. Sending someone a link to an original document location in the portal builds
better habits than, for instance, including the answer in an email response. It's the same
challenge described in the User support article.
Training
A key factor for successfully enabling users in a Power BI community is training. It's
important that the right training resources are readily available and easily discoverable.
While some users are so enthusiastic about Power BI that they'll find information and
figure things out on their own, it isn't true for most of the user community.
Making sure your community users have access to the training resources they need to
be successful doesn't mean that you need to develop your own training content.
Developing training content is often counterproductive due to the rapidly evolving
nature of the product. Fortunately, an abundance of training resources is available in the
worldwide community. A curated set of links goes a long way to help users organize and
focus their training efforts, especially for tool training, which focuses on the technology.
All external links should be validated by the COE for accuracy and credibility. It's a key
opportunity for the COE to add value because COE stakeholders are in an ideal position
to understand the learning needs of the community, and to identify and locate trusted
sources of quality learning materials.
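Basic link validation is easy to automate on a schedule. Below is a minimal sketch that reads a plain-text file of curated training links (one URL per line; the file name is hypothetical) and reports any that no longer respond successfully. It only confirms reachability, so accuracy and credibility still require human review by the COE.

```python
import requests

# Hypothetical input: one training URL per line, maintained by the COE.
LINKS_FILE = "curated_training_links.txt"

def check_links(path: str) -> list[tuple[str, str]]:
    """Return (url, problem) pairs for links that fail a basic reachability check."""
    problems = []
    with open(path, encoding="utf-8") as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        try:
            response = requests.get(url, timeout=10, allow_redirects=True)
            if response.status_code >= 400:
                problems.append((url, f"HTTP {response.status_code}"))
        except requests.RequestException as exc:
            problems.append((url, type(exc).__name__))
    return problems

if __name__ == "__main__":
    for url, problem in check_links(LINKS_FILE):
        print(f"Needs review: {url} ({problem})")
```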
You'll find the greatest return on investment with creating custom training materials for
organizational-specific processes, while relying on content produced by others for
everything else. It's also useful to have a short training class that focuses primarily on
topics like how to find documentation, getting help, and interacting with the
community.
Tip
One of the goals of training is to help people learn new skills while helping them
avoid bad habits. It can be a balancing act. For instance, you don't want to
overwhelm people by adding in a lot of complexity and friction to a beginner-level
class for report creators. However, it's a great investment to make newer content
creators aware of things that could otherwise take them a while to figure out. An
ideal example is teaching the ability to use a live connection to report from an
existing dataset. By teaching this concept at the earliest logical time, you can save a
less experienced creator from thinking they always need one dataset for every report
(and encourage the good habit of reusing existing datasets across reports).
Some larger organizations experience continual employee transfers and turnover. Such
frequent change results in an increased need for a repeatable set of training resources.
Content consumers
Report creators
Data creators (datasets and dataflows)
Content owners, subject matter experts, and workspace administrators
Satellite COE members and the champions network
Power BI administrators
) Important
Each type of user represents a different audience that has different training needs.
The COE will need to identify how best to meet the needs of each audience. For
instance, one audience might find a standard introductory Power BI Desktop class
overwhelming, whereas another will want more challenging information with depth
and detail. If you have a diverse population of Power BI content creators, consider
creating personas and tailoring the experience to an extent that's practical.
The completion of training can be a leading indicator for success with user adoption.
Some organizations grant badges, like blue belt or black belt, as people progress
through the training programs.
Give some consideration to how you want to handle users at various stages of user
adoption. Training to onboard new users (sometimes referred to as training day zero)
and for less experienced users is very different to training for more experienced users.
How the COE invests its time in creating and curating training materials will change over
time as adoption and maturity grows. You may also find over time that some community
champions want to run their own tailored set of training classes within their functional
business unit.
Sources for trusted Power BI training content
A curated set of online resources is valuable to help community members focus and
direct their efforts on what's important. Some publicly available training resources you
might find helpful include:
Consider using Microsoft Viva Learning , which is integrated into Microsoft Teams. It
includes content from sources such as Microsoft Learn and LinkedIn Learning . Custom
content produced by your organization can be included as well.
If you do make the investment to create custom in-house training, consider creating
short, targeted content that focuses on solving one specific problem. It makes the
training easier to find and consume. It's also easier to maintain and update over time.
Tip
The Help and Support menu in the Power BI service is customizable. Once your
centralized location for training documentation is operational, update the tenant
setting in the Admin portal with the link. The link can then be accessed from the menu
when users select the Get Help option. Also, be sure to teach users about the Help
ribbon tab in Power BI Desktop. It includes links to guided learning, training videos,
documentation, and more.
Documentation
Concise, well-written documentation can be a significant help for users trying to get
things done. Your needs for documentation, and how it's delivered, will depend on how
Power BI is managed in your organization. For more information, see the Content
ownership and management article.
Certain aspects of Power BI tend to be managed by a centralized team, such as the COE.
The following types of documentation are helpful in these situations:
How to request a Power BI license (and whether there are requirements for
manager approval)
How to request a new Premium capacity
How to request a new workspace
How to request a workspace be added to Premium capacity
How to request access to a gateway data source
How to request software installation
Tip
For certain activities that are repeated over and over, consider automating them
using Power Apps and Power Automate. In this case, your documentation will also
include how to access and use the Power Platform functionality.
Tip
When planning for a centralized portal, as described earlier in this article, plan how
to handle situations when guidance or governance policies need to be customized
for one or more business units.
There are also going to be some governance decisions that have been made and should
be documented, such as:
) Important
One of the most helpful pieces of documentation you can publish for the
community is a description of the tenant settings, and the group memberships
required for each tenant setting. Users read about features and functionality online,
and sometimes find that it doesn't work for them. When they are able to quickly
look up your organization's tenant settings, it can save them from becoming
frustrated and attempting workarounds. Effective documentation can reduce the
number of help desk tickets that are submitted. It can also reduce the number of
people who need to be assigned the Power BI administrator role (who might have
this role solely for the purpose of viewing settings).
Over time, you may choose to allow some documentation to be maintained by the
community if you have willing volunteers. In this case, you may want to introduce an
approval process for changes.
When you see questions repeatedly arise in the Q&A forum (as described in the User
support article), during office hours, or during lunch and learns, it's a great indicator that
creating new documentation may be appropriate. When the documentation exists, it
allows colleagues to reference it when needed. It contributes to user enablement and a
self-sustaining community.
Tip
Templates
A Power BI template is a .pbit file. It can be provided as a starting point for content
creators. It's the same as a .pbix file, which can contain queries, a data model, and a
report, but with one exception: the template file doesn't contain any data. Therefore, it's
a smaller file that can be shared with the community, and it doesn't present a risk of
inappropriately sharing data.
Providing Power BI template files for your community is a great way to:
Promote consistency.
Reduce learning curve.
Show good examples and best practices.
Increase efficiency.
Power BI template files can improve efficiency and help people learn during the normal
course of their work. A few ways that template files are helpful include:
7 Note
Providing templates not only saves your content creators time, it also helps them
move quickly beyond a blank page in an empty solution.
Checklist - Considerations and key actions you can take to establish, or improve,
mentoring and user enablement.
" Consider what mentoring services the COE can support: Decide what types of
mentoring services the COE is capable of offering. Types may include office hours,
co-development projects, and best practices reviews.
" Communicate about mentoring services regularly: Decide how you will
communicate and advertise mentoring services, such as office hours, to the user
community.
" Establish a regular schedule for office hours: Ideally, hold office hours at least once
per week (depending on demand from users as well as staffing and scheduling
constraints).
" Decide what the expectations will be for office hours: Determine what the scope
of allowed topics or types of issues users can bring to office hours. Also, determine
how the queue of office hours requests will work, whether any information should
be submitted ahead of time, and whether any follow up afterwards can be
expected.
" Create a centralized portal: Ensure that you have a well supported centralized hub
where users can easily find Power BI training, documentation, and resources. The
centralized portal should also provide links to other community resources such as
the Q&A forum and how to find help.
" Create documentation and resources: In the centralized portal, being creating and
compiling useful documentation. Identify and promote the top 3-5 resources that
will be most useful to the user community.
" Update documentation and resources regularly: Ensure that content is reviewed
and updated on a regular basis. The objective is to ensure that the information
available in the portal is current and reliable.
" Compile a curated list of reputable training resources: Identify training resources
that target the training needs and interests of your user community. Post the list in
the centralized portal and create a schedule to review and validate the list.
" Consider whether custom in-house training will be useful: Identify whether
custom training courses, developed in-house, will be useful and worth the time
investment. Invest in creating content that's specific to the organization.
" Create goals and metrics: Determine how you'll measure effectiveness of the
mentoring program. Create KPIs (key performance indicators) or OKRs (objectives
and key results) to validate that the COE's mentoring efforts strengthen the
community and its ability to provide self-service BI.
Maturity levels
The following maturity levels will help you assess the current state of your mentoring
and user enablement.
100: Initial - Some documentation and resources exist. However, they're siloed and
inconsistent. Few users are aware of, or take advantage of, available resources.
200: Repeatable - A centralized portal exists with a library of helpful documentation and
resources. A curated list of training links and resources is available in the centralized
portal. Office hours are available so the user community can get assistance from the COE.
300: Defined - The centralized portal is the primary hub for community members to locate
training, documentation, and resources. The resources are commonly referenced by
champions and community members when supporting and learning from each other. The
COE's skills mentoring program is in place to assist users in the community in various ways.
400: Capable - Office hours have regular and active participation from all business units in
the organization. Best practices reviews from the COE are regularly requested by business
units. Co-development projects are repeatedly executed with success by the COE and
members of business units.
500: Efficient - Training, documentation, and resources are continually updated and
improved by the COE to ensure the community has current and reliable information.
Measurable and tangible business value is gained from the mentoring program by using
KPIs or OKRs.
Next steps
In the next article in the Power BI adoption roadmap series, learn more about the
community of practice.
Power BI adoption roadmap:
Community of practice
Article • 02/27/2023
7 Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
A community of practice is a group of people with a common interest that interacts with,
and helps, each other on a voluntary basis. Using Power BI to produce effective analytics
is a common interest that can bring people together across an organization.
Champions are the smallest group among creators and SMEs. Self-service content
creators and SMEs represent a larger number of people. Content consumers represent
the largest number of people.
7 Note
All references to the Power BI community in this adoption series of articles refer to
internal users, unless explicitly stated otherwise. There's an active and vibrant
worldwide community of bloggers and presenters who produce a wealth of
knowledge about Power BI. However, internal users are the focus of this article.
For information about related topics including resources, documentation, and training
provided for the Power BI community, see the Mentoring and user enablement article.
Champions network
One important part of a community of practice is its champions. A champion is a Power
BI content creator who works in a business unit that engages with the COE. A champion
is recognized by their peers as the go-to Power BI expert. A champion continually builds
and shares their knowledge even if it's not an official part of their job role. Power BI
champions influence and help their colleagues in many ways including solution
development, learning, skills improvement, troubleshooting, and keeping up to date.
Champions typically:
Have a deep interest in Power BI being used effectively and adopted successfully
throughout the organization.
Possess strong Power BI skills as well as domain knowledge for their functional
business unit.
Have an inherent interest in getting involved and helping others.
Are early adopters who are enthusiastic about experimenting and learning.
Can effectively translate business needs into solutions.
Communicate well with colleagues.
Tip
Often, people aren't directly asked to become champions. Commonly, champions are
identified by the COE and recognized for the activities they're already doing, such as
frequently answering questions in an internal discussion channel or participating in
lunch and learns.
Different approaches will be more effective for different organizations, and each
organization will find what works best for them as their maturity level increases.
) Important
Someone very well may be acting in the role of a champion without even knowing
it, and without a formal recognition. The COE should always be on the lookout for
champions. COE members should actively monitor the discussion channel to see
who is helpful. They should deliberately encourage and support potential
champions, and when appropriate, invite them into a champions network to make
the recognition formal.
Knowledge sharing
The overriding objective of a community of practice is to facilitate knowledge sharing
among colleagues and across organizational boundaries. There are many ways
knowledge sharing occurs. It could be during the normal course of work. Or, it could be
during a more structured activity, such as:
Discussion channel: A Q&A forum where anyone in the community can post and view
messages. Often used for help and announcements. For more information, see the User
support article.
Lunch and learn sessions: Regularly scheduled sessions where someone presents a short
session about something they've learned or a solution they've created. The goal is to get a
variety of presenters involved, because it's a powerful message to hear firsthand what
colleagues have achieved.
Office hours with the COE: Regularly scheduled times when COE experts are available so
the community can engage with them. Community users can receive assistance with
minimal process overhead. For more information, see the Mentoring and user enablement
article.
Book club: A subset of the community select a book to read on a schedule. They discuss
what they've learned and share their thoughts with each other.
Tip
Inviting an external presenter can reduce the effort level and bring a fresh
viewpoint for learning and knowledge sharing.
Incentives
A lot of effort goes into forming and sustaining a successful community. It's
advantageous to everyone to empower and reward users who work for the benefit of
the community.
Rewarding champions
Incentives that champions find particularly rewarding can include:
More direct access to the COE: The ability to have connections in the COE is
valuable. It's depicted in the diagram shown earlier in this article.
Champion of the month: Publicly thank one of your champions for something
outstanding they did during the previous month. It could be a fun tradition at the
beginning of a monthly lunch and learn.
A private experts discussion area: A private area for the champions to share ideas
and learn from each other is usually highly valued.
Specialized or deep dive information and training: Access to additional
information to help champions grow their skillsets (as well as help their colleagues)
will be appreciated. It could include attending advanced training classes or
conferences.
Communication plan
Communication with the community occurs through various types of communication
channels. Common communication channels include:
Internal discussion channel or forum.
Announcements channel.
Organizational newsletter.
The most critical communication objectives include ensuring your community members
know that:
Types of communication
There are generally four types of communication to plan for:
Community resources
Resources for the internal community, such as documentation, templates, and training,
are critical for adoption success. For more information about resources, see the
Mentoring and user enablement article.
Checklist - Considerations and key actions you can take for the community of practice
follow.
" Clarify goals: Clarify what your specific goals are for cultivating a champions
network. Make sure these goals align with your overall Power BI strategy, and that
your executive sponsor is on board.
" Create a plan for the champions network: Although some aspects of a champions
network will always be informally led, determine to what extent the COE will
purposefully cultivate and support champion efforts throughout individual business
units. Consider how many champions is ideal for each functional business area.
Usually, 1-2 champions per area works well, but it can vary based on the size of the
team, the needs of the self-service community, and how the COE is structured.
" Decide on commitment level for champions: Decide what level of commitment
and expected time investment will be required of Power BI champions. Consider
whether the time investment will vary widely from person to person, and team to
team, due to different responsibilities. Plan to clearly communicate expectations to
people who are interested in getting involved. Obtain manager approval when
appropriate.
" Decide how to identify champions: Determine how you will respond to requests to
become a champion, and how the COE will seek out champions. Decide if you will
openly encourage interested employees to self-identify as a champion and ask to
learn more (less common). Or, whether the COE will observe efforts and extend a
private invitation (more common).
" Determine how members of the champions network will be managed: Once
excellent option for managing who the champions are is with a security group.
Consider:
How you will communicate with the champions network (for example, in a Teams
channel, a Yammer group, and/or an email distribution list).
How the champions network will communicate and collaborate with each other
directly (across organizational boundaries).
Whether a private and exclusive discussion forum for champions and COE
members is appropriate.
" Plan resources for champions: Ensure members of the champions network have
the resources they need, including:
Direct access to COE members.
Influence on data policies being implemented (for example, requirements for a
dataset certification policy).
Influence on the creation of best practices and guidance (for example,
recommendations for accessing a specific source system).
" Involve champions: Actively involve certain champions as satellite members of the
COE. For more information about federating the COE, see the Center of Excellence
article.
" Create a feedback loop for champions: Ensure that members of the champions
network can easily provide information or submit suggestions to the COE.
" Routinely provide recognition and incentives for champions: Not only is praise an
effective motivator, but the act of sharing examples of successful efforts can
motivate and inspire others.
Introduce incentives:
" Identify incentives for champions: Consider what type of incentives you could offer
to members of your champions network.
" Identify incentives for community members: Consider what type of incentives you
could offer to your broader internal community.
Improve communications:
Maturity levels
The following maturity levels will help you assess the current state of your community of
practice.
100: Initial - Some self-service content creators are doing great work throughout the
organization. However, their efforts aren't recognized.
Goals for transparent communication with the user community are defined.
400: Capable - Champions are identified for all business units. They actively support
colleagues in their self-service efforts.
500: Efficient - Bidirectional feedback loops exist between the champions network and the
COE. Key performance indicators measure community engagement and satisfaction.
Automation is in place when it adds direct value to the user experience (for example,
automatic access to the community).
Next steps
In the next article in the Power BI adoption roadmap series, learn about user support.
Power BI adoption roadmap: User
support
Article • 02/27/2023
7 Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
This article addresses user support. It focuses primarily on the resolution of issues.
The first sections of this article focus on user support aspects you have control over
internally within your organization. The final topics focus on external resources that are
available.
The following diagram shows some common types of user support that organizations
employ successfully:
The six types of user support shown in the above diagram include:
Intra-team support (internal) is very informal. Support occurs when team members learn
from each other during the natural course of their job.
Internal community support (internal) occurs when colleagues help each other through an
internal community, such as a discussion channel, which is usually organized by the COE.
Help desk support (internal) handles formal support issues and requests.
Extended support (internal) involves handling complex issues escalated by the help desk.
Microsoft support (external) includes support for licensed users and administrators. It
also includes comprehensive Power BI documentation.
Community support (external) comes from the worldwide Power BI community of blogs,
forums, and other publicly available resources.
In some organizations, intra-team and internal community support are most relevant for
self-service BI (content is owned and managed by creators and owners in decentralized
business units). Conversely, the help desk and extended support are reserved for
technical issues and enterprise BI (content is owned and managed by a centralized
business intelligence team or Center of Excellence). In some organizations, all four types
of support could be relevant for any type of content.
Tip
For more information about business-led self-service BI, managed self-service BI,
and enterprise BI concepts, see the Content ownership and management article.
Each of the four types of internal user support introduced above is described in further
detail in this article.
Intra-team support
Intra-team support refers to when team members learn from and help each other during
their daily work. People who emerge as your Power BI champions tend to take on this
type of informal support role voluntarily because they have an intrinsic desire to help.
Although it's an informal support mode, it shouldn't be undervalued. Some estimates
indicate that a large percentage of learning at work is peer learning, which is particularly
helpful for analysts who are creating domain-specific Power BI solutions.
7 Note
Intra-team support does not work well for individuals who are the only data analyst
within a department. It's also not effective for those who don't have very many
connections yet in their organization. When there aren't any close colleagues to
depend on, other types of support, as described in this article, become more
important.
Internal community support
Tip
Be sure to cultivate multiple experts in the more difficult topics like Data
Analysis eXpressions (DAX) and the Power Query M formula language.
When someone becomes a recognized expert, they may become
overburdened with too many requests for help.
A greater number of community members may readily answer certain types of
questions (for example, report visualizations), whereas a smaller number of
members will answer others (for example, complex DAX). It's important for the
COE to allow the community a chance to respond yet also be willing to
promptly handle unanswered questions. If users repeatedly ask questions and
don't receive an answer, it will significantly hinder growth of the community.
In this case, a user is likely to leave and never return if they don't receive any
responses to their questions.
An internal community discussion channel is commonly set up as a Teams channel or a
Yammer group. The technology chosen should reflect where users already work, so that
the activities occur within their natural workflow.
One benefit of an internal discussion channel is that responses can come from people
that the original requester has never met before. In larger organizations, a community of
practice brings people together based on a common interest. It can offer diverse
perspectives for getting help and learning in general.
Use of an internal community discussion channel allows the Center of Excellence (COE)
to monitor the kind of questions people are asking. It's one way the COE can understand
the issues users are experiencing (commonly related to content creation, but it could
also be related to consuming content).
Monitoring the discussion channel can also reveal additional Power BI experts and
potential champions who were previously unknown to the COE.
Another key benefit of a discussion channel is that it's searchable, which allows other
people to discover the information. It is, however, a change of habit for people to ask
questions in an open forum rather than private messages or email. Be sensitive to the
fact that some individuals aren't comfortable asking questions in such a public way. It
openly acknowledges what they don't know, which might be embarrassing. This
reluctance may reduce over time by promoting a friendly, encouraging, and helpful
discussion channel.
Tip
You may be tempted to create a bot to handle some of the most common,
straightforward questions from the community. A bot can work for uncomplicated
questions such as "How do I request a Power BI license?" or "How do I request a
workspace?" Before taking this approach, consider if there are enough routine and
predictable questions that would make the user experience better rather than
worse. Often, a well-created FAQ (frequently asked questions) works better, and it's
faster to develop and easier to maintain.
Help desk support
There are also certain technical issues that can't be fully resolved without IT involvement,
like software installation and upgrade requests when machines are IT-managed.
Busy help desk personnel are usually dedicated to supporting multiple technologies. For
this reason, the easiest types of issues to support are those which have a clear resolution
and can be documented in a knowledgebase. For instance, software installation
prerequisites or requirements to get a license.
Some organizations ask the help desk to handle only very simple break-fix issues. Other
organizations have the help desk get involved with anything that is repeatable, like new
workspace requests, managing gateway data sources, or requesting a new Premium
capacity.
) Important
Your Power BI governance decisions will directly impact the volume of help desk
requests. For example, if you choose to limit workspace creation permissions in
the tenant settings, it will result in users submitting help desk tickets. While it's a
legitimate decision to make, you must be prepared to satisfy the request very
quickly. Respond to this type of request within 1-4 hours, if possible. If you delay
too long, users will use what they already have or find a way to work around your
requirements. That may not be the ideal scenario. Promptness is critical for certain
help desk requests. Consider that automation by using Power Apps and Power
Automate can help make some processes more efficient.
Over time, troubleshooting and problem resolution skills become more effective as help
desk personnel expand their knowledgebase and experience with Power BI. The best
help desk personnel are those who have a good grasp of what users need to accomplish
with Power BI.
Tip
Purely technical issues, for example data refresh failure or the need to add a new
user to a gateway data source, usually involve straightforward responses
associated with a service level agreement. For instance, there may be an agreement
to respond to blocking issues within one hour and resolve them within eight hours.
It's generally more difficult to define service level agreements (SLAs) for
troubleshooting issues, like data discrepancies.
Extended support
Since the COE has deep insight into how Power BI is used throughout the organization,
they're a great option for extended support should a complex issue arise. Involving the
COE in the support process should be by an escalation path.
Managing requests as purely an escalation path from the help desk gets difficult to
enforce since COE members are often well-known to business users. To encourage the
habit of going through the proper channels, COE members should redirect users to
submit a help desk ticket. It will also improve the data quality for analyzing help desk
requests.
Microsoft support
In addition to the internal user support approaches discussed in this article, there are
valuable external support options directly available to Power BI users and administrators
that shouldn't be overlooked.
Microsoft documentation
Check the Power BI support site for high-priority issues that broadly affect all customers.
Global Microsoft 365 administrators have access to additional support issue details
within the Microsoft 365 portal.
Monitor the Microsoft 365 Twitter account . Microsoft posts timely information and
updates about outages for all Microsoft 365 services.
Refer to the comprehensive Power BI documentation. It's an authoritative resource that
can aid troubleshooting and searching for information. You can prioritize results from the
Power BI documentation site. For example, enter a site-targeted search request into
your web search engine, like power bi dataset site:learn.microsoft.com .
Tip
Make it clear to your internal user community whether you prefer technical issues
be reported to the internal help desk. If your help desk is equipped to handle the
workload, having a centralized internal area collect user issues can provide a
superior user experience versus every user trying to resolve issues on their own.
Having visibility and analyzing support issues is also helpful for the COE.
Administrator support
There are several support options available for global and Power BI administrators.
For customers who have a Microsoft Unified Support contract, consider granting help
desk and COE members access to the Microsoft Services Hub . One advantage of the
Microsoft Services Hub is that your help desk and COE members can be set up to
submit and view support requests.
Community documentation
The Power BI global community is vibrant. Every day, there are a great number of Power
BI blog posts, articles, webinars, and videos published. When relying on community
information for troubleshooting, watch out for:
How recent the information is. Try to verify when it was published or last updated.
Whether the situation and context of the solution found online truly fits your
circumstance.
The credibility of the information being presented. Rely on reputable blogs and
sites.
Checklist - Considerations and key actions you can take for user support follow.
" Determine help desk responsibilities: Decide what the initial scope of Power BI
topics that the help desk will handle.
" Assess the readiness level: Determine whether your help desk is prepared to
handle Power BI support. Identify whether there are readiness gaps to be
addressed.
" Arrange for additional training: Conduct knowledge transfer sessions or training
sessions to prepare the help desk staff.
" Update the help desk knowledgebase: include known Power BI questions and
answers in a searchable knowledgebase. Ensure someone is responsible for regular
updates to the knowledgebase to reflect new and enhanced features over time.
" Set up a ticket tracking system: Ensure a good system is in place to track requests
submitted to the help desk.
" Decide if anyone will be on-call for any issues related to Power BI: If appropriate,
ensure the expectations for 24/7 support are clear.
" Determine what SLAs will exist: When a specific service level agreement (SLA)
exists, ensure that expectations for response and resolution are clearly documented
and communicated.
" Be prepared to act quickly: Be prepared to address specific common issues
extremely quickly. Slow support response will result in users finding workarounds.
" Determine how escalated support will work: Decide what the escalation path will
be for requests the help desk cannot directly handle. Ensure that the COE (or
equivalent personnel) is prepared to step in when needed. Clearly define where
help desk responsibilities end, and where COE extended support responsibilities
begin.
" Encourage collaboration between COE and system administrators: Ensure that
COE members have a direct escalation path to reach global administrators for
Microsoft 365 and Azure. It's critical to have a communication channel when a
widespread issue arises that's beyond the scope of Power BI.
" Create a feedback loop from the COE back to the help desk: When the COE learns
of new information, the IT knowledgebase should be updated. The goal is for the
primary help desk personnel to continually become better equipped at handling
more issues in the future.
" Create a feedback loop from the help desk to the COE: When support personnel
observe redundancies or inefficiencies, they can communicate that information to
the COE, who might choose to improve the knowledgebase or get involved
(particularly if it relates to governance or security).
Maturity levels
The following maturity levels will help you assess the current state of your Power BI user
support.
100: Initial - Individual business units find effective ways of supporting each other.
However, the tactics and practices are siloed and not consistently applied.
200: Repeatable - The COE actively encourages intra-team support and growth of the
champions network. The internal discussion channel gains traction. It becomes known as
the default place for Power BI questions and discussions. The help desk handles a small
number of the most common Power BI technical support issues.
300: Defined - The internal discussion channel is popular and largely self-sustaining. The
COE actively monitors and manages the discussion channel to ensure that all questions are
answered quickly and correctly.
400: Capable - The help desk is fully trained and prepared to handle a broader number of
known and expected Power BI technical support issues. SLAs are in place to define help
desk support expectations, including extended support. The expectations are documented
and communicated so they're clear to everyone involved.
500: Efficient - Bidirectional feedback loops exist between the help desk and the COE. Key
performance indicators measure satisfaction and support methods. Automation is in place
to allow the help desk to react faster and reduce errors (for example, use of APIs and
scripts).
Next steps
In the next article in the Power BI adoption roadmap series, learn about system
oversight and administration activities.
Power BI adoption roadmap: System
oversight
Article • 08/31/2023
7 Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
) Important
Your organizational data culture objectives provide direction for your governance
decisions, which in turn dictate how Power BI administration activities take place
and by whom.
System oversight is a broad and deep topic. The goal of this article is to introduce some
of the most important considerations and actions to help you become successful with
your organizational adoption objectives.
Power BI administrators
The Fabric administrator role is a defined role in Microsoft 365, which delegates a subset
of Power BI-specific management activities. Global Microsoft 365 administrators are
implicitly Power BI administrators. Power Platform administrators are also implicitly
Power BI administrators (however, Power BI administrators don't have the same role in
other Power Platform applications).
Tip
The best type of person to assign as a Power BI administrator is one who has
enough knowledge about Power BI to understand what self-service users need to
accomplish. With this understanding, the administrator can balance user
empowerment and governance.
There are several types of Power BI administrators. The following table describes the
roles that are used most often on a regular basis.
Power BI administrator (scope: the Power BI tenant): Manages tenant settings and other
aspects of the Power BI service. All general references to administrator in this article refer
to this type of administrator.
Power BI Premium capacity administrator (scope: one capacity): Manages workspaces,
workloads, and monitors the health of a Premium capacity.
The Power BI ecosystem is broad and deep. There are many ways that the Power BI
service integrates with other systems and platforms. From time to time, it will be
necessary to work with other system administrators and IT professionals, such as:
The remainder of this article discusses the most common activities that a Power BI
administrator does. It focuses on those activities that are important to carry out
effectively when taking a strategic approach to Power BI organizational adoption.
Service management
Overseeing the Power BI service is a crucial aspect to ensure that all users have a good
experience with Power BI.
Tenant settings
Proper management of tenant settings in the Power BI service is critical. Tenant settings
are the main way to control which Power BI capabilities are enabled, and for which
groups of users in your organization.
It's essential that tenant settings align with governance guidelines and policies, and with
how the COE makes decisions. If a Power BI administrator independently decides which
settings to enable or disable, that's a clear indicator of an opportunity to improve and
refine your governance processes.
) Important
Changing the tenant settings should go through a change control process with an
approval mechanism. It should document all changes, recording who made the
change, when, and why.
Because content creators and consumers can easily read online about available features
in Power BI, it can be frustrating for users when capabilities don't function as expected.
It can lead to dissatisfied users and less effective organizational adoption, user adoption,
and solution adoption.
U Caution
An administrator may discover situations that aren't ideal, such as too many data
exports in the activity log. Resist the urge to disable the feature entirely. Prohibiting
features leads to user frustration, and leads users to find workarounds. Before
disabling a setting, find out why users are relying on certain techniques. Perhaps a
solution needs to be redesigned, or additional user education and training could
mitigate the concerns. The bottom line: knowledge sharing is an effective form of
governance.
Because there's no reader role for viewing tenant settings, keeping users informed of the
current settings can be a challenge in larger organizations. Consider publishing a document
to the centralized portal that describes the tenant settings, as described in the Mentoring
and user enablement article.
The following activities apply when reviewing and validating each tenant setting:
Tenant setting:
Enabled, or
Disabled
Tenant setting applicable to:
The entire organization, or
Limited to specific security group(s):
Does a suitable security group already exist?, or
Does a new security group need to be created?
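As a simple starting point for the tenant settings documentation described earlier, here's a minimal Python sketch (not an official tool) that renders a manually maintained list of tenant settings, their state, and the security groups that grant access into a plain-text page you could post to the centralized portal. The settings, group names, and notes shown are hypothetical examples; substitute your organization's actual values.

```python
# Hypothetical examples only: replace with your organization's actual tenant
# settings, their state, and the security groups that grant access.
tenant_settings = [
    {
        "setting": "Export to Excel",
        "state": "Enabled",
        "applies_to": "Specific security groups",
        "groups": ["PBI-Export-Users"],
        "notes": "Request membership through the IT service portal.",
    },
    {
        "setting": "Publish to web",
        "state": "Disabled",
        "applies_to": "Entire organization",
        "groups": [],
        "notes": "Disabled for data protection reasons; no exceptions.",
    },
]

def render_portal_page(settings):
    """Build a plain-text summary suitable for the centralized portal."""
    lines = ["Power BI tenant settings summary", ""]
    for s in settings:
        lines.append(f"Setting: {s['setting']}")
        lines.append(f"  State: {s['state']} ({s['applies_to']})")
        if s["groups"]:
            lines.append(f"  Security groups: {', '.join(s['groups'])}")
        if s["notes"]:
            lines.append(f"  Notes: {s['notes']}")
        lines.append("")
    return "\n".join(lines)

print(render_portal_page(tenant_settings))
```

Keeping this list under source control alongside your change control records makes it easy to regenerate the portal page whenever a setting changes.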
Admin portal
As described in the Power BI adoption maturity levels article, organizational adoption
refers to the effectiveness of Power BI governance and data management practices to
support and enable enterprise BI and self-service BI. Actively managing all areas of the
Power BI service in accordance with adoption goals helps ensure that all your users have
a good experience with Power BI. Areas to oversee in the admin portal include:
Tenant settings
Auditing and monitoring
Workspace management and access
Premium capacity and Premium Per User settings
Embed codes
Organizational visuals
Azure connections
Custom branding
Protection metrics
Featured content
User machines and devices
The management of user machines and devices is usually a responsibility of the IT
department. The adoption of Power BI depends directly on content creators and
consumers having the applications they need.
Here are some questions that can help you plan for user machines and devices.
How will users request access to new tools? Will access to licenses, data, and
training be available to help users use tools effectively?
How will content consumers view content that's been published by others?
How will content creators develop, manage, and publish content? What's your
criteria for deciding which tools and applications are appropriate for each use
case?
How will you install and set up tools? Does that include related prerequisites and
data connectivity components?
How will you manage ongoing updates for tools and applications?
Common software and its audience:
Power BI Desktop: Content creators who develop data models and interactive reports for
deployment to the Power BI service.
Power BI Desktop Optimized for Report Server: Content creators who develop data models
and interactive reports for deployment to Power BI Report Server.
Power BI Report Builder: Content creators who develop paginated reports for deployment
to the Power BI service or Power BI Report Server.
Power BI Mobile Application: Content creators or consumers who interact with content
that's been published to the Power BI service or Power BI Report Server, using iOS,
Android, or Windows 10 applications.
On-premises data gateway (personal mode): Content creators who publish datasets to the
Power BI service and manage scheduled data refresh (see more detail in the Gateway
architecture and management section of this article).
Third-party tools: Advanced content creators may optionally use third-party tools for
advanced data model management.
) Important
Not all the listed software will be necessary for all content creators. Power BI
Desktop is the most common requirement and is the starting point when in doubt.
All content creators who collaborate with others should use the same version of the
software—especially Power BI Desktop, which is updated monthly. Ideally, software
updates are available from the Microsoft Store or installed by an automated IT process.
This way, users don't have to take any specific action to obtain updates.
Because new capabilities are continually released, software updates should be released
promptly. This way, users can take advantage of the new capabilities, and their
experience is aligned to documentation. It's also important to be aware of the update
channel. It provides new (and updated) features for Office apps, such as Excel and Word,
on a regular basis.
Other common items that may need to be installed on user machines include:
Drivers to support data connectivity, for example, Oracle, HANA, or the Microsoft
Access Database Engine
The Analyze in Excel provider
External tools. For example, Tabular Editor, DAX Studio, or ALM Toolkit.
Custom data source connectors
Group policy settings: For example, group policy can specify the allowed usage of
custom visuals in Power BI Desktop. The objective is a consistent user experience
in Power BI Desktop and the Power BI service, which prevents user frustration
(for example, if users were allowed to create content in Power BI Desktop that
can't be displayed in the Power BI service).
Registry settings: For example, you can choose to disable the Power BI Desktop
sign-in form or tune Query Editor performance.
Tip
Effective management of software, drivers, and settings can make a big difference
to the user experience, and that can translate to increased user adoption and
satisfaction, and reduced user support costs.
Architecture
Data architecture
Data architecture refers to the principles, practices, and methodologies that govern and
define what data is collected, and how it's ingested, stored, managed, integrated,
modeled, and used.
There are many data architecture decisions to make. Frequently the COE engages in
data architecture design and planning. It's common for administrators to get involved as
well, especially when they manage databases or Azure infrastructure.
) Important
Where does Power BI fit into the organization's entire data architecture? And, are
there other existing components such as an enterprise data warehouse (EDW) or a
data lake that will be important to factor into plans?
Is Power BI used end-to-end for data preparation, data modeling, and data
presentation? Or, is Power BI used for only some of those capabilities?
Are managed self-service BI patterns followed to find the best balance between
data reusability and report creator flexibility?
Where will users consume the content? Generally, the three main ways to deliver
content are: the Power BI service, Power BI Report Server, and embedded in
custom applications. The Planning a Power BI enterprise deployment whitepaper
includes a section on Power BI architectural choices, which describes when to
consider each of these three main choices. Additionally, Microsoft Teams is a
convenient alternative to the Power BI service, especially for users who spend a lot
of time in Teams.
Who is responsible for managing and maintaining the data architecture? Is it a
centralized team, or a decentralized team? How is the COE represented in this
team? Are certain skillsets required?
What data sources are the most important? What types of data will we be
acquiring?
What connectivity mode and storage mode choices (for example, import, live
connection, DirectQuery, or composite model frameworks) are the best fit for the
use cases?
To what extent is data reusability encouraged using shared datasets?
To what extent is the reusability of data preparation logic and advanced data
preparation encouraged by using dataflows?
When becoming acquainted with Power BI, many system administrators assume it's a
query tool much like SQL Server Reporting Services (SSRS). The breadth of capabilities
of Power BI, however, is vast in comparison. So, it's important for administrators to
become aware of Power BI capabilities before they make architectural decisions.
Tip
Get into the good habit of completing a technical proof of concept (POC) to test
out assumptions and ideas. Some organizations also call them micro-projects when
the goal is to deliver a small unit of work. The goal of a POC is to address
unknowns and reduce risk as early as possible. A POC doesn't have to be
throwaway work, but it should be narrow in scope. Best practices reviews, as
discussed in the Mentoring and user enablement article, are another useful way to
help content creators with important architectural decisions.
Power BI Premium can play a significant role in your BI strategy. Some top reasons to
invest in Premium include:
This list isn't all-inclusive. For a complete list of Premium features, see Power BI Premium
FAQ.
Overseeing the health of Power BI Premium capacity is an essential ongoing activity for
administrators. By definition, Premium capacity includes a fixed level of system
resources. It equates to memory and CPU limits that must be managed to achieve
optimal performance.
U Caution
Lack of management and exceeding the limits of Premium capacity can often result
in performance challenges and user experience challenges. Both challenges, if not
managed correctly, can contribute to negative impact on adoption efforts.
Create a specific set of criteria for content that will be published to Premium
capacity. It's especially relevant when a single capacity is used by multiple business
units because the potential exists to disrupt other users if the capacity isn't well-
managed. For a list of items that may be included in the best practices review (such
as reasonable dataset size and efficient calculations), see the Mentoring and user
enablement article.
Regularly use the Premium monitoring app to understand resource utilization and
patterns for the Premium capacity. Most importantly, look for consistent patterns
of overutilization, which will contribute to user disruptions. An analysis of usage
patterns should also make you aware if the capacity is underutilized, indicating
more value could be gained from the investment.
Configure the tenant setting so Power BI notifies you if the Premium capacity
becomes overloaded, or if an outage or incident occurs.
Autoscale
Autoscale is a capability of Power BI Premium. It's intended to handle occasional or
unexpected bursts in Premium usage levels. Autoscale can respond to these bursts by
automatically increasing CPU resources to support the increased workload.
Automated scaling up reduces the risk of performance and user experience challenges
in exchange for a financial impact. If the Premium capacity isn't well-managed, autoscale
may trigger more often than expected. In this case, the Premium monitoring app can
help you to determine underlying issues and do capacity planning.
Be aware that workspace administrators can also assign a workspace to PPU if the
workspace administrator possesses a PPU license. However, all other workspace users
must also have a PPU license.
Here's an example that describes one way you could manage Premium capacity.
The limits per capacity are lower. The maximum memory size allowed for datasets
isn't the entire P3 capacity node size. Rather, it's the assigned capacity size where
the dataset is hosted.
It's more likely one of the smaller capacities will need to be scaled up at some
point in time.
There are more capacities to manage in the Power BI tenant.
Tip
The decision of who can install gateway software is a governance decision. For
most organizations, use of the data gateway in standard mode, or a virtual network
data gateway, should be strongly encouraged. They're far more scalable,
manageable, and auditable than data gateways in personal mode.
Managed by centralized data owners (includes data sources that are used broadly across
the organization; management is centralized to avoid duplicated data sources):
Managed by IT:
User licenses
Every user of the Power BI service needs a commercial license, which is integrated with
an Azure AD identity. The user license may be Free, Power BI Pro, or Power BI Premium
Per User.
Self-service purchasing
An important governance decision relates to what extent self-service purchasing will be
allowed or encouraged. Consider disabling self-service purchasing when:
There are serious cost concerns that would make it unlikely to grant full licenses at
the end of the trial period.
Prerequisites are required for obtaining a license (such as approval, justification, or
a training requirement). It's not sufficient to meet this requirement during the trial
period.
There's a valid need, such as a regulatory requirement, to control access to the
Power BI service closely.
Tip
Don't introduce too many barriers to obtaining a Power BI license. Users who need
to get work done will find a way, and that way may involve workarounds that aren't
ideal. For instance, without a license to use the Power BI service, people may rely far
too much on sharing files on a file system or via email when significantly better
approaches are available.
Cost management
Managing and optimizing the cost of cloud services, like Power BI, is an important
activity. Here are several activities you may want to consider.
Analyze who is using—and, more to the point, not using—their allocated Power BI
licenses and make necessary adjustments. Power BI usage is analyzed using the
activity log.
Analyze the cost effectiveness of Premium capacity or Premium Per User. In
addition to the extra features, perform a cost/benefit analysis to determine
whether Premium licensing is more cost-effective when there's a large number
of consumers. Unlimited content distribution is only available with Premium
capacity, not PPU licensing.
Carefully monitor and manage Premium capacity. Understanding usage patterns
over time will allow you to predict when to purchase more capacity. For example,
you may choose to scale up a single capacity from a P1 to P2, or scale out from
one P1 capacity to two P1 capacities.
If there are occasional spikes in the level of usage, use of autoscale with Power BI
Premium is recommended to ensure the user experience isn't interrupted.
Autoscale will scale up capacity resources for 24 hours, then scale them back down
to normal levels (if sustained activity isn't present). Manage autoscale cost by
constraining the maximum number of v-cores, and/or with spending limits set in
Azure (because autoscale is supported by the Azure Power BI Embedded service).
Due to the pricing model, autoscale is best suited to handle occasional unplanned
increases in usage.
For Azure data sources, co-locate them in the same region as your Power BI tenant
whenever possible. It will avoid incurring Azure egress charges. Data egress
charges are minimal, but at scale they can add up to considerable unplanned costs.
The Power BI security whitepaper is an excellent resource for understanding the breadth
of considerations, including aspects that Microsoft manages. This section will introduce
several topics that customers are responsible for managing.
User responsibilities
Some organizations ask Power BI users to accept a self-service user acknowledgment.
It's a document that explains the user's responsibilities and expectations for
safeguarding organizational data.
One way to automate its implementation is with an Azure AD terms of use policy. The
user is required to agree to the policy before they're permitted to visit the Power BI
service for the first time. You can also require it to be acknowledged on a recurring
basis, like an annual renewal.
Data security
In a cloud shared responsibility model, securing the data is always the responsibility of
the customer. With a self-service BI platform, self-service content creators have
responsibility for properly securing the content that they share with colleagues.
The COE should provide documentation and training where relevant to assist content
creators with best practices (particularly for situations that involve ultra-sensitive data).
External user access is controlled by tenant settings in the Power BI service and certain
Azure AD settings. For details of external user considerations, review the Distribute
Power BI content to external guest users using Azure AD B2B whitepaper.
Data residency
For organizations with requirements to store data within a geographic region, Premium
capacity (not PPU) can be configured for a specific region that's different from the
region of the Power BI home tenant.
Encryption keys
Microsoft handles encryption of data at rest in Microsoft data centers with transparent
server-side encryption and auto-rotation of certificates. For customers with regulatory
requirements to manage the Premium encryption key themselves, Premium capacity can
be configured to use Azure Key Vault. Using customer-managed keys—also known as
bring-your-own-key or BYOK—is a precaution to ensure that, in the event of a human
error by a service operator, customer data can't be exposed.
Be aware that Premium Per User (PPU) only supports BYOK when it's enabled for the
entire Power BI tenant.
There are different ways to approach auditing and monitoring depending on your role
and your objectives. The following articles describe various considerations and planning
activities.
You can retrieve auditing data to build an auditing solution, manage content
programmatically, or increase the efficiency of routine actions. The following table
presents some actions you can perform with the REST APIs.
Audit workspaces, items, and permissions: Collection of asynchronous metadata scanning
REST APIs to obtain a tenant inventory.
Audit content shared to the entire organization: REST API to check use of widely shared
links.
Manage gateway data sources: REST API to update credentials for a gateway data source.
Programmatically retrieve a query result from a dataset: REST API to run a DAX query
against a dataset.
Tip
There are many other REST APIs. For a complete list, see Using the Power BI REST
APIs.
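As an illustration of the last action in the table above, the following Python sketch runs a DAX query against a dataset by calling the executeQueries REST API. It assumes you've already acquired an Azure AD access token with a Power BI scope (for example, by using the MSAL library); the dataset ID, token value, and 'Sales' table name are placeholders.

```python
import requests

# Placeholders: supply a real dataset ID and an Azure AD access token acquired
# for the Power BI scope (for example, via the MSAL library).
DATASET_ID = "00000000-0000-0000-0000-000000000000"
ACCESS_TOKEN = "<access-token>"

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"
headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}

# A simple DAX query against a hypothetical 'Sales' table.
body = {
    "queries": [{"query": "EVALUATE TOPN(10, 'Sales')"}],
    "serializerSettings": {"includeNulls": True},
}

response = requests.post(url, headers=headers, json=body, timeout=60)
response.raise_for_status()

# Each query in the request produces one result containing one or more tables.
for result in response.json()["results"]:
    for table in result["tables"]:
        for row in table["rows"]:
            print(row)
```

The same pattern (acquire a token, call the endpoint, inspect the JSON response) applies to the other REST APIs listed above.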
) Important
Don't underestimate the importance of staying current. If you get a few months
behind on announcements, it can become difficult to properly manage the Power
BI service and support your users.
Checklist - Considerations and key actions you can take for system oversight follow.
" Review tenant settings: Conduct a review of all tenant settings to ensure they're
aligned with data culture objectives and governance guidelines and policies. Verify
which groups are assigned for each setting.
" Document the tenant settings: Create documentation of your tenant settings for
the internal Power BI community and post it in the centralized portal. Include which
groups a user would need to request to be able to use a feature.
" Customize the Get Help links: When user resources are established, as described in
the Mentoring and user enablement article, update the tenant setting to customize
the links under the Get Help menu option. It will direct users to your
documentation, community, and help.
" Create a consistent onboarding process: Review your process for how onboarding
of new content creators is handled. Determine if new requests for software, such as
Power BI Desktop, and user licenses (Power BI Pro or Premium Per User) can be
handled together. It can simplify onboarding since new content creators won't
always know what to ask for.
" Handle user machine updates: Ensure an automated process is in place to install
and update software, drivers, and settings to ensure all users have the same version.
" Assess what your end-to-end data architecture looks like: Make sure you're clear
on:
How Power BI is currently used by the different business units in your
organization versus how you want Power BI to be used. Determine if there's a
gap.
If there are any risks that should be addressed.
If there are any high-maintenance situations to be addressed.
What data sources are important for Power BI users, and how they're
documented and discovered.
" Review existing data gateways: Find out what gateways are being used throughout
your organization. Verify that gateway administrators and users are set correctly.
Verify who is supporting each gateway, and that there's a reliable process in place
to keep the gateway servers up to date.
" Verify use of personal gateways: Check the number of personal gateways that are
in use, and by whom. If there's significant usage, take steps to move towards use of
the standard mode gateway.
" Review the process to request a user license: Clarify what the process is, including
any prerequisites, for users to obtain a license. Determine whether there are
improvements to be made to the process.
" Determine how to handle self-service license purchasing: Clarify whether self-
service license purchasing is enabled. Update the settings if they don't match
your intentions for how licenses can be purchased.
" Confirm how user trials are handled: Verify user license trials are enabled or
disabled. Be aware that all user trials are Premium Per User. They apply to Free
licensed users signing up for a trial, and Power BI Pro users signing up for a
Premium Per User trial.
" Clarify exactly what the expectations are for data protection: Ensure the
expectations for data protection, such as how to use sensitivity labels, are
documented and communicated to users.
" Determine how to handle external users: Understand and document the
organizational policies around sharing Power BI content with external users. Ensure
that settings in the Power BI service support your policies for external users.
" Set up monitoring: Investigate the use of Microsoft Defender for Cloud Apps to
monitor user behavior and activities in the Power BI service.
Improve auditing and monitoring:
" Plan for auditing needs: Collect and document the key business requirements for
an auditing solution. Consider your priorities for auditing and monitoring. Make key
decisions related to the type of auditing solution, permissions, technologies to be
used, and data needs. Consult with IT to clarify what auditing processes currently
exist, and what preferences or requirements exist for building a new solution.
" Consider roles and responsibilities: Identify which teams will be involved in
building an auditing solution, as well as the ongoing analysis of the auditing data.
" Extract and store user activity data: If you aren't currently extracting and storing
the raw data, begin retrieving user activity data.
" Extract and store snapshots of tenant inventory data: Begin retrieving metadata to
build a tenant inventory, which describes all workspaces and items.
" Extract and store snapshots of users and groups data: Begin retrieving metadata
about users, groups, and service principals.
" Create a curated data model: Perform data cleansing and transformations of the
raw data to create a curated data model that'll support analytical reporting for your
auditing solution.
" Analyze auditing data and act on the results: Create analytic reports to analyze the
curated auditing data. Clarify what actions are expected to be taken, by whom, and
when.
" Include additional auditing data: Over time, determine whether other auditing
data would be helpful to complement the activity log data, such as security data.
" Plan for your use of the REST APIs: Consider what data would be most useful to
retrieve from the Power BI REST APIs.
" Conduct a proof of concept: Do a small proof of concept to validate data needs,
technology choices, and permissions.
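To support the activity data items in the checklist above, here's a minimal Python sketch that retrieves one day of events from the Get Activity Events admin REST API and writes the raw JSON to a local file. It assumes an access token for an identity that's permitted to call Power BI admin APIs (an administrator or an approved service principal); the token value, date range, and output file name are placeholders.

```python
import json
import requests

# Placeholder: an access token for an identity allowed to call Power BI admin APIs.
ACCESS_TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# The API accepts a window of at most one day, and the datetime values must be
# wrapped in single quotes within the URL.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2024-01-01T00:00:00Z'"
    "&endDateTime='2024-01-01T23:59:59Z'"
)

events = []
while url:
    response = requests.get(url, headers=HEADERS, timeout=60)
    response.raise_for_status()
    payload = response.json()
    events.extend(payload.get("activityEventEntities", []))
    # Follow the continuation URI until the result set is complete (None at the end).
    url = payload.get("continuationUri")

# Store the raw extract; a curated data model can be built from these files later.
with open("activity-events-2024-01-01.json", "w", encoding="utf-8") as f:
    json.dump(events, f, indent=2)

print(f"Saved {len(events)} activity events.")
```

Running one extract per day and storing each file unmodified preserves the raw history, which the later checklist items (curated data model and analytic reports) can build on.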
Maturity levels
The following maturity levels will help you assess the current state of your Power BI
system oversight.
100: Initial - Tenant settings are configured independently by one or more administrators
based on their best judgment. Power BI activity logs are unused, or selectively used for
tactical purposes.
200: Repeatable - The tenant settings purposefully align with established governance
guidelines and policies. All tenant settings are reviewed regularly. A well defined process
exists for users to request licenses and software. Request forms are easy for users to find.
Self-service purchasing settings are specified. Sensitivity labels are configured in Microsoft
365. However, use of labels remains inconsistent. The advantages of data protection aren't
well understood by users.
300: Defined - The tenant settings are fully documented in the centralized portal for users
to reference, including how to request access to the correct groups. An automated process
is in place to export Power BI activity log and API data to a secure location for reporting
and auditing.
400: Capable - Administrators work closely with the COE and governance teams to provide
oversight of Power BI. A balance of user empowerment and governance is successfully
achieved. Automated policies are set up and actively monitored in Microsoft Defender for
Cloud Apps for data loss prevention. Power BI activity log and API data is actively analyzed
to monitor and audit Power BI activities. Proactive action is taken based on the data.
500: Efficient - The Power BI administrators work closely with the COE to actively stay
current. Blog posts and release plans from the Power BI product team are reviewed
frequently to plan for upcoming changes. Regular cost management analysis is done to
ensure user needs are met in a cost-effective way. Power BI activity log and API data is
actively used to inform and improve adoption and governance efforts.
Next steps
For more information about system oversight and Power BI administration, see the
following resources.
In the conclusion to the Power BI adoption roadmap series, learn about adoption-related
resources that you might find valuable.
Power BI adoption roadmap: Change
management
Article • 09/11/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
When working toward improved business intelligence (BI) adoption, you should plan for
effective change management. In the context of BI, change management includes
procedures that address the impact of change for people in an organization. These
procedures safeguard against disruption and productivity loss due to changes in
solutions or processes.
Effective change management is important because it:
Helps content creators and consumers use analytics more effectively and sooner.
Limits redundancy in data, analytical tools, and solutions.
Reduces the likelihood of risk-creating behaviors that affect shared resources (like
Premium or Fabric capacity) or organizational compliance (like data security and
privacy).
Mitigates resistance to change that obstructs planning and inhibits user adoption.
Mitigates the impact of change and improves user wellbeing by reducing the
potential for disruption, stress, and conflict.
Important
Managing change effectively is a fundamental challenge in many organizations.
Effective change management requires that you understand that it's about people,
not tools or processes.
Tip
Consider the following types of change to manage when you plan for Power BI
adoption.
Process-level changes
Process-level changes are changes that affect a broader user community or the entire
organization. These changes typically have a larger impact, and so they require more
effort to manage. This change management effort includes specific plans and
activities.
Solution-level changes
Solution-level changes are changes that affect a single solution or set of solutions.
These changes limit their impact to the user community of those solutions and their
dependent processes. Although solution-level changes typically have a lower impact,
they also tend to occur more frequently.
Note
In the context of this article, a solution is built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a data lakehouse, a
Power BI dataset, or a report. The considerations for change management
described in this article are relevant for all types of solutions, and not only
reporting projects.
How you prepare change management plans and activities will depend on the types of
change. To successfully and sustainably manage change, we recommend that you
implement incremental changes.
The following steps outline how you can incrementally address change.
1. Define what's changing: Describe the change by outlining the before and after
states. Clarify the specific parts of the process or situation that you'll change,
remove, or introduce. Justify why this change is necessary, and when it should
occur.
2. Describe the impact of the change: For each of these changes, estimate the
business impact. Identify which processes, teams, or individuals the change affects,
and how disruptive it will be for them. Also consider any downstream effects the
change has on other dependent solutions or processes. Downstream effects may
result in other changes. Additionally, consider how long the situation remained the
same before it was changed. Changes to longer-standing processes tend to have a
higher impact, as preferences and dependencies arise over time.
3. Identify priorities: Focus on the changes with the highest potential impact. For
each change, outline a more detailed description of the change and how it will
affect people.
4. Plan how to incrementally implement the change: Identify whether any high-
impact changes can be broken into stages or parts. For each part, describe how it
might be incrementally implemented in phases to limit its impact. Determine
whether there are any constraints or dependencies (such as when changes can be
made, or by whom).
5. Create an action plan for each phase: Plan the actions you will take to implement
and support each phase of the change. Also, plan for how you can mitigate
disruption in high-impact phases. Be sure to include a rollback plan in your action
plan, whenever possible.
Tip
Iteratively plan how you'll implement each phase of these incremental changes as
part of your quarterly BI tactical planning.
When you plan to mitigate the impact of changes on Power BI adoption, consider the
activities described in the following sections.
When you communicate a change, make sure the communication describes:
What's changing: What the situation is now and what it will be after the change.
Why it's changing: The benefit and value of the change for the audience.
When it's changing: An estimation of when the change will take effect.
Further context: Where people can go for more information.
Contact information: Who people should contact to provide feedback, ask questions,
or raise concerns.
Important
You should communicate change with sufficient advance notice so that people are
prepared. The higher the potential impact of the change, the earlier you should
communicate it. If unexpected circumstances prevent advance notice, be sure to
explain why in your communication.
Here are some actions you can take to plan for training and support.
Centralize training and support by using a centralized portal. The portal can help
organize discussions, collect feedback, and distribute training materials or
documentation by topic.
Consider incentives to encourage self-sustaining support within a community.
Schedule recurring office hours to answer questions and provide mentorship.
Create and demonstrate end-to-end scenarios for people to practice a new
process.
For high-impact changes, prepare training and support plans that realistically
assess the effort and actions needed to prevent the change from causing
disruption.
Note
These training and support actions will differ depending on the scale and scope of
the change. For high-impact, large-scale changes (like transitioning from enterprise
BI to managed self-service BI), you'll likely need to plan iterative, multi-phase plans
that span multiple planning periods. In this case, carefully consider the effort and
resources needed to deliver success.
Caution
Resistance to change from the executive leadership is often a warning sign that
stronger business alignment is needed between the business and BI strategies. In
this scenario, consider specific alignment sessions and change management actions
with executive leadership.
Involve stakeholders
To effectively manage change, you can also take a bottom-up approach by engaging the
stakeholders, who are the people the change affects. When you create an action plan to
address the changes, identify and engage key stakeholders in focused, limited sessions.
In this way you can understand the impact of the change on the people whose work will
be affected by the change. Take note of their concerns and their ideas for how you
might lessen the impact of this change. Ensure that you identify any potentially
unexpected effects of the change on other people and processes.
Involve your executive sponsor: The authority, credibility, and influence of the
executive sponsor is essential to support change management and resolve
disputes.
Identify blocking issues: When change disrupts the way people work, this change
can prevent people from effectively completing tasks in their regular activities. For
such blocking issues, identify potential workarounds that take the changes into
account.
Focus on data and facts instead of opinions: Resistance to change is sometimes
due to opinions and preferences, because people are familiar with the situation
prior to the change. Understand why people have these opinions and preferences.
Perhaps it's due to convenience, because people don't want to invest time and
effort in learning new tools or processes.
Focus on business questions and processes instead of requirements: Changes
often introduce new processes to address problems and complete tasks. New
processes can lead to resistance to change because people focus on what they
miss instead of fully understanding what's new and why.
Tip
The promoters you identify might also be great candidates for your champions
network.
To effectively manage change, you should identify and engage detractors early in the
process. That way, you can mitigate the potential negative impact they have.
Furthermore, if you address their concerns, you might convert these detractors into
promoters, helping your adoption efforts.
Tip
A common source of detractors is content owners for solutions that are going to be
modified or replaced. The change can sometimes threaten these content owners,
who are incentivized to resist the change in the hope that their solution will remain
in use. In this case, identify these content owners early and involve them in the
change. Giving these individuals a sense of ownership of the implementation will
help them embrace, and even advocate for, the change.
Questions to ask
Use questions like those found below to assess change management.
Maturity levels
The following maturity levels will help you assess your current state of change
management, as it relates to data and BI initiatives.
100: Initial
• Change is usually reactive, and it's also poorly communicated.
• No clear teams or roles are responsible for managing change for data initiatives.
200: Repeatable
• Executive leadership and decision makers recognize the need for change
management in BI projects and initiatives.
• Some efforts are taken to plan or communicate change, but they're inconsistent
and often reactive. Resistance to change is still common. Change often disrupts
existing processes and tools.
300: Defined
• Formal change management plans or roles are in place. These plans include
communication tactics and training, but they're not consistently or reliably
followed. Change occasionally disrupts existing processes and tools.
Next steps
In the next article in the Power BI adoption roadmap series, the conclusion, learn about
adoption-related resources that you might find valuable.
Power BI adoption roadmap conclusion
Article • 02/27/2023
Note
This article forms part of the Power BI adoption roadmap series of articles. This
series focuses primarily on the Power BI workload within Microsoft Fabric. Most of
the guidance in these articles is applicable more broadly to Microsoft Fabric. For an
overview of the series, see Power BI adoption roadmap.
This article concludes the series on Power BI adoption. The strategic and tactical
considerations and action items presented in this series will assist you in your Power BI
adoption efforts, and with creating a productive data culture in your organization.
Adoption overview
Adoption maturity levels
Data culture
Executive sponsorship
Content ownership and management
Content delivery scope
Center of Excellence
Governance
Mentoring and enablement
Community of practice
User support
System oversight
The rest of this article includes suggested next actions to take. It also includes other
adoption-related resources that you might find valuable.
1. Learn: First, read this series of articles end-to-end. Become familiar with the
strategic and tactical considerations and action items that directly lead to
successful Power BI adoption. They'll help you to build a data culture in your
organization. Discuss the concepts with your colleagues.
2. Assess current state: For each area of the adoption roadmap, assess your current
state. Document your findings. Your goal is to have full clarity on where you are now
so that you can make informed decisions about what to do next.
3. Clarify strategy: Ensure that you're clear on what your organization's goals are for
adopting Power BI. Confirm that your Power BI adoption goals align with your
organization's broader strategic goals for the use of data, analytics, and business
intelligence in general. Focus on what your immediate strategy is for the next 3-12
months.
4. Identify future state: For each area of the roadmap, identify the gaps between
what you want to happen (your future state) and what's happening (your current
state). Focus on the next 3-12 months for identifying your desired future state.
5. Customize maturity levels: Using the information you have on your strategy and
future state, customize the maturity levels for each area of the roadmap. Update or
delete the description for each maturity level so that they're realistic, based on
your goals and strategy. Your current state, priorities, staffing, and funding will
influence the time and effort it will take to advance to the higher maturity levels.
6. Prioritize: Clarify what's most important to achieve in the next 3-12 months. For
instance, you might identify specific user enablement or risk reduction items that
are a higher priority than other items. Determine which advancements in maturity
levels you should prioritize first.
7. Define measurable goals: Create KPIs (key performance indicators) or OKRs
(objectives and key results) to define specific goals. Ensure that the goals are
measurable and achievable. Confirm that each goal aligns with your strategy and
desired future state.
8. Create action items: Add specific action items to your project plan. Action items
will identify who will do what, and when. Include short, medium, and longer-term
(backlog) items in your project plan to make it easy to track and reprioritize.
9. Track action items: Use your preferred project planning software to track
continual, incremental progress of your action items. Summarize progress and
status every quarter for your executive sponsor.
10. Adjust: As new information becomes available—and as priorities change—
reevaluate and adjust your focus. Reexamine your strategy, goals, and action items
once a quarter so you're certain that you're focusing on the right actions.
11. Celebrate: Pause regularly to appreciate your progress. Celebrate your wins.
Reward and recognize people who take the initiative and help achieve your goals.
Encourage healthy partnerships between IT and the different areas of the business.
12. Repeat: Continue learning, experimenting, and adjusting as you progress with your
Power BI implementation. Use feedback loops to continually learn from everyone
in the organization. Ensure that continual, gradual improvement is a priority.
A few key points are implicit in the previous suggestions.
Focus on the near term: Although it's important to have an eye on the big picture,
we recommend that you focus primarily on the next quarter, next semester, and
next year. It's easier to assess, plan, and act when you focus on the near term.
Progress will be incremental: Changes that happen every day, every week, and
every month add up over time. It's easy to become discouraged and sense a lack
of progress when you're working on a large adoption initiative that takes time. If
you keep track of your incremental progress, you'll be surprised at how much you
can accomplish over the course of a year.
Changes will continually happen: Be prepared to reconsider decisions that you
make, perhaps every quarter. It's easier to cope with continual change when you
expect the plan to change.
Everything is connected: As you progress through each of the steps listed
above, it's important that everything stays connected, from the high-level strategic
organizational objectives, all the way down to more detailed action items. That
way, you'll know that you're working on the right things.
The Power BI adoption framework can augment this Power BI adoption roadmap series.
The roadmap series focuses on the why and what of adopting Power BI, more so than the how.
Note
When completed, the Power BI implementation planning series (described in the
previous section) will replace the Power BI adoption framework.
The whitepaper goes deeper into the what and how of adopting Power BI, with a strong
focus on technology. When you've finished reading the series of Power BI adoption
articles, the whitepaper will fill you in with extra information to help put your plans into
action.
Microsoft's BI transformation
Consider reading about Microsoft's journey and experience with driving a data culture.
This article describes the importance of two terms: discipline at the core and flexibility at
the edge. It also shares Microsoft's views and experience about the importance of
establishing a COE.
The Power CAT Adoption Maturity Model , published by the Power CAT team,
describes repeatable patterns for successful Power Platform adoption.
The Power Platform Center of Excellence Starter Kit is a collection of components and
tools to help you develop a strategy for adopting and supporting Microsoft Power
Platform.
The Power Platform adoption best practices includes a helpful set of documentation and
best practices to help you align business and technical strategies.
The Maturity Model for Microsoft 365 provides information and resources to use
capabilities more fully and efficiently.
Microsoft Learn has a learning path for using the Microsoft service adoption
framework to drive adoption in your enterprise.
The Microsoft Cloud Adoption Framework for Azure is a collection of
documentation, implementation guidance, best practices, and tools to accelerate
your cloud adoption journey.
A wide variety of other adoption guides for individual technologies can be found online.
A few examples include:
Industry guidance
The Data Management Body of Knowledge (DMBOK2) is a book available for
purchase from DAMA International. It contains a wealth of information about maturing
your data management practices.
Note
The additional resources provided in this article aren't required to take advantage
of the guidance provided in this Power BI adoption series. They're reputable
resources should you wish to continue your journey.
Partner community
Experienced Power BI partners are available to help your organization succeed with
Power BI. To engage a Power BI partner, visit the Power BI partner portal .
Power BI implementation planning
Article • 08/31/2023
In this video, watch Matthew introduce you to the Power BI implementation planning
series of articles.
https://www.microsoft.com/en-us/videoplayer/embed/RWUWA9?postJsllMsg=true
Subject areas
When you implement Power BI, there are many subject areas to consider. The following
subject areas form part of the Power BI implementation planning series:
BI strategy
User tools and devices
Tenant setup
Subscriptions, licenses, and trials
Roles and responsibilities
Power BI service administration
Workspaces
Data management
Content deployment
Content distribution and sharing
Security
Information protection and data loss prevention
Power BI Premium
Gateways
Integration with other services
Auditing and monitoring
Adoption tracking
Scaling and growing
Note
The series is a work in progress. We will gradually release new and updated articles
over time.
Usage scenarios
The series includes usage scenarios that illustrate different ways that creators and
consumers can deploy and use Power BI:
Purpose
When completed, the series will:
Recommendations
To set yourself up for success, we recommend that you work through the following
steps:
1. Read the complete Power BI adoption roadmap, familiarizing yourself with each
roadmap subject area. Assess your current state of Power BI adoption, and gain
clarity on the data culture objectives for your organization.
2. Explore Power BI implementation planning articles that are relevant to you. Start
with the Power BI usage scenarios, which convey how you can use Power BI in
diverse ways. Be sure to understand which usage scenarios apply to your
organization, and by whom. Also, consider how these usage scenarios may
influence the implementation strategies you decide on.
3. Read the articles for each of the subject areas that are listed above. You might
choose to initially do a broad review of the contents from top to bottom. Or you
may choose to start with subject areas that are your highest priority. Carefully
review the key decisions and actions that are included for each topic (at the end of
each section). We recommend that you use them as a starting point for creating
and customizing your plan.
4. When necessary, refer to Power BI documentation for details on specific topics.
Target audience
The intended audience of this series of articles may be interested in the following
outcomes:
This series is particularly helpful for organizations that are in the early stages of a
Power BI implementation or are planning an expanded implementation. It may also be
helpful for those who work in an organization with one or more of the following
characteristics:
Power BI has pockets of viral adoption and success in the organization, but it's not
consistently well-managed or purposefully governed.
Power BI is deployed with some meaningful scale, but there are many unrealized
opportunities for improvement.
Acknowledgments
The Power BI implementation planning articles are written by Melissa Coates, Kurt
Buhler, and Peter Myers. Matthew Roche, from the Fabric Customer Advisory Team,
provides strategic guidance and feedback to the subject matter experts.
Next steps
In the next article in this series, learn about usage scenarios that describe how you can
use Power BI in diverse ways.
Experienced Power BI partners are available to help your organization succeed with the
migration process. To engage a Power BI partner, visit the Power BI partner portal .
Power BI implementation planning: BI
strategy
Article • 09/11/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This article introduces the business intelligence (BI) strategy series of articles. The BI
strategy series is targeted at multiple audiences:
Defining your BI strategy is essential to get the most business value from BI initiatives
and solutions. Having a clearly defined BI strategy is important to ensure efforts are
aligned with organizational priorities. In some circumstances, it's particularly important.
We recommend that you pay special attention to these articles if your organization is:
Migrating to, or implementing, Microsoft Fabric or Power BI for the first time: A
clear BI strategy is crucial to the successful implementation of any new platform or
tool.
Experiencing significant growth of Microsoft Fabric or Power BI usage: A BI
strategy brings clarity and structure to organic adoption, helping to enable users
while mitigating risk.
Seeking to become data-driven or achieve digital transformation: A BI strategy is
critical to modernizing your organization and helping you to achieve a competitive
advantage.
Experiencing significant business or technological change: Planning your BI
strategy ensures that your organization can use change as momentum and not as
an obstacle.
Reevaluating your business strategy: Your business strategy should influence your
BI strategy, which in turn can lead to changes in your business strategy. All
strategies should be in alignment in order to achieve your organizational goals.
A BI strategy is a plan to implement, use, and manage data and analytics to better
enable your users to meet their business goals. An effective BI strategy ensures that data
and analytics support your business strategy.
The following diagram depicts how a BI strategy supports the business strategy by
enabling business users.
The diagram depicts the following concepts.
Item Description
The business strategy describes how the organization will achieve its business goals.
The business strategy directly informs the BI strategy. The primary purpose of the BI
strategy is to support—and potentially inform—the business strategy.
The BI strategy is a plan to implement, use, and manage data and analytics.
BI goals define how BI will support the business goals. BI goals describe the desired future
state of the BI environment.
To make progress toward your BI goals, you identify and describe BI objectives that you
want to achieve in a specific time period. These objectives describe paths to your desired
future state.
To achieve your BI objectives, you plan and implement BI solutions and initiatives. A
solution might be developed by a central IT or BI team, or by a member of the community
of practice as a self-service solution.
The purpose of BI solutions and initiatives is to enable business users to achieve their
objectives.
Business users use BI solutions and initiatives to make informed decisions that lead to
effective actions.
Business users follow through on the business strategy with their achieved results. They
achieve these results by taking the right actions at the right time, which is made possible
in part by an effective BI strategy.
Note
In this series, goals are high-level descriptions of what you want to achieve. In
contrast, objectives are specific, actionable targets to help you achieve a goal. While
a goal describes the desired future state, objectives describe the path to get there.
Further, solutions are processes or tools built to address specific business needs for
users. A solution can take many forms, such as a data pipeline, a data lakehouse, or
a Power BI dataset or report.
Business strategy: The organizational goal is to improve customer satisfaction and reduce
customer churn. One business strategy to achieve this goal is to reduce the number of late
customer deliveries.
BI strategy: BI goal: To support the business strategy, the BI goal is to improve the
effectiveness of orders and deliveries reporting.
Business users: Enabled by these BI solutions, business users can more effectively identify
and mitigate potential late deliveries. These solutions result in fewer late deliveries and
improved customer satisfaction, allowing the organization to achieve progress toward its
business goals.
The following diagram depicts how a BI strategy is a subset of a data strategy, and how
they share concepts related to data culture and technology.
The diagram depicts the following concepts.
Item Description
A data strategy describes the goals and priorities for the wider use and management of
data in an organization. A data strategy encompasses more than only BI.
Data culture is important in both a BI strategy and a data strategy. Different data culture
areas describe a vision for behaviors, values, and processes that enable people to work
effectively with data. An example of a data culture area is data literacy.
Technology is important in both a BI strategy and a data strategy. Different technical areas
support the business data needs and use cases. An example of a technical area is data
visualization.
A BI strategy can encompass many data culture and technical areas. However, when
planning your BI strategy, you should be cautious not to attempt to address too many
of these areas at first. A successful BI strategy starts small. It focuses on a few prioritized
areas and broadens scope over time, ensuring consistent progress. Later, as you
experience sustainable success with your BI strategy, it can incrementally evolve to
encompass more areas.
Defining a BI strategy
There are many ways to define a BI strategy. Typically, when you define a BI strategy,
you begin with the priority areas that describe your BI goals. Based on these prioritized
goals, you define specific, measurable objectives. To achieve these objectives, you build
solutions and enact specific initiatives. You then incrementally scale your BI strategy to
include more areas as you experience success.
You can define your BI strategy at three different planning levels, as depicted in the
following diagram.
Item Description
Strategic planning: You begin by defining your strategic BI goals and priorities, and how
they support your business strategy. These BI goals are high-level descriptions of what you
want to achieve and why.
Tactical planning: You then identify your specific BI objectives. These objectives are
specific, measurable, short-term actions that describe how you'll make progress toward
your long-term, strategic BI goals.
Solution planning: The BI solutions and initiatives that you create should be a direct result
of tactical planning. These solutions enable you to achieve your BI objectives and make
progress toward your BI goals.
BI strategy example
The following high-level, hypothetical example explains how you can transition from
business goals to BI goals. It then explains how to transition from BI goals to objectives,
and then to BI solutions.
In this example, the organization has set a goal to increase sales effectiveness. One
strategy the business uses to achieve this goal is to sell more high-margin products to
its top customers.
To support this strategy, the organization identified BI goals in the following areas:
Data literacy: Improve the ability of the salespeople to make decisions based on
data and report visualizations.
Content ownership: Clarify who owns the data and reporting items for different
use cases.
Mentoring and user enablement: More effectively enable the salespeople with the
skills and tools to answer questions with data.
Governance: More effectively balance governance risk and enablement of the sales
teams.
Data engineering: Create a unified view of sales and profitability data for analytics.
Note
In this example, many other factors might be important. However, the organization
has identified these particular priorities to advance progress towards their business
goals and strategy.
Objectives
To achieve their BI goals, the BI team conducts tactical planning to identify and describe
their short-term objectives. The BI team creates a data literacy program for the
salespeople. Also, the BI team drafts a user enablement plan and an accountability plan
for salespeople who want to perform self-service analytics. These plans allow the
salespeople to request access to data after they've completed specific training materials
and signed a self-service user acknowledgment.
The short-term objectives include:
Data literacy: Ensure that 90 percent of the salespeople complete the data literacy
program.
Content ownership: Adopt the managed self-service BI usage scenario, where
central teams manage central, certified Power BI datasets and reports. Some self-
service content creators can connect to these datasets for their own analysis and
reporting needs.
Mentoring and user enablement: Create a centralized portal to share training
resources, template files, and host weekly office hours Q&A sessions.
Governance: Draft a tenant-wide monitoring solution for user activities based on
data from the Power BI activity log, and identify data democratization and data
discovery priorities for the next quarter.
Data engineering: Design and start a proof of concept for a medallion lakehouse
architecture to store the sales and profitability data.
Data security: Define and implement data security rules for BI solutions.
Information protection and data loss prevention (DLP): Define how content
creators should endorse content by promoting or certifying data items. Conduct
an investigation into whether to use sensitivity labels and implement DLP policies.
BI solutions
To achieve its objectives, the organization aims to design and deploy the following BI
solutions.
Central BI teams will work to store profitability data for customers and products in
a unified lakehouse.
Central BI teams will publish an enterprise semantic model as a Power BI dataset
that includes all data required for central reporting and key self-service reporting
scenarios.
Security rules applied to the Power BI dataset enforce that salespeople can only
access data for their assigned customers.
Central BI teams will create central reports that show aggregate sales and
profitability across regions and product groups. These central reports will support
more sophisticated analysis by using interactive visualizations.
Salespeople can connect directly to the BI dataset to perform personal BI and
answer specific, one-off business questions.
Note
This example describes a simple scenario for the purpose of explaining the three
planning levels of a BI strategy. In reality, your strategic BI goals, objectives, and BI
solutions are likely to be more complex.
There are many ways to iteratively plan your BI strategy. A common approach is to
schedule planning revisions over periods that align with existing planning processes in
the organization.
The following diagram depicts recommendations for how you can schedule planning
revisions.
The diagram depicts how you can iteratively structure your BI strategy planning. It
shows the following concepts.
Item Description
Avoid detailed, long-term planning: Long-term, detailed plans can become outdated as
technology and business priorities change.
Strategic planning (every 12-18 months): This high-level planning focuses on aligning
business goals and BI goals. It's valuable to align this strategic planning with other
annualized business processes, like budgeting periods.
Tactical planning (every 1-3 months): Monthly or quarterly planning sessions focus on
evaluating and adjusting existing tactical plans. These sessions assess existing priorities
and tasks, which should take business feedback and changes in business or technology
into account.
Continuous improvement (every month): Monthly sessions focus on feedback and urgent
changes that impact ongoing planning. If necessary, decision makers can make decisions
and adjust plans accordingly.
The diagram shows three levels of BI strategy planning, which are each described in
separate articles. We recommend that you read these articles in the following order.
1. BI strategic planning: This article describes how you can form a working team to
lead the initiative to define your BI strategy. The working team prepares workshops
with key stakeholders to understand and document the business strategy. The
team then assesses the effectiveness of BI in supporting the business strategy. This
assessment helps to define the strategic BI goals and priorities. After strategic
planning, the working team proceeds to tactical planning.
2. BI tactical planning: This article describes how the working team can identify
specific objectives to achieve the BI goals. As part of these objectives, the working
team creates a prioritized backlog of BI solutions. The team then defines what
success of the BI strategy will look like, and how to measure it. Finally, the working
team commits to revise tactical planning every quarter. After tactical planning, you
proceed to solution planning.
3. BI solution planning: This article describes how you design and build BI solutions
that support the BI objectives. You first assemble a project team that's responsible
for a solution in the prioritized solution backlog. The project team then gathers
requirements to define the solution design. Next, it plans for deployment and
conducts a proof of concept (POC) of the solution to validate assumptions. If the
POC is successful, the project team creates and tests content with iterative cycles
that gradually onboard the user community. When ready, the project team deploys
the solution to production, supporting and monitoring it as needed.
Tip
Before you read the BI strategy articles, we recommend that you're already familiar
with the Power BI adoption roadmap. The adoption roadmap describes
considerations to achieve Power BI adoption and a healthy data culture. These BI
strategy articles build upon the adoption roadmap.
Next steps
In the next article in this series, learn about BI strategic planning.
Power BI implementation planning: BI
strategic planning
Article • 09/11/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This article helps you to define your business intelligence (BI) goals and priorities
through strategic planning. It's primarily targeted at:
BI and analytics directors or managers: Decision makers who are responsible for
overseeing the BI program and BI strategic planning.
Center of Excellence (COE), IT, and BI teams: The teams that are responsible for
tactical planning, and for measuring and monitoring progress toward the BI
objectives.
Subject matter experts (SMEs) and content owners and creators: The teams and
individuals that champion analytics in a team or department and conduct BI
solution planning. These teams and individuals are responsible for representing
the strategy and data needs of their business area when you define the BI strategy.
A BI strategy is a plan to implement, use, and manage data and analytics. As described
in the BI strategy overview article, your BI strategy is a subset of your data strategy. It
supports your business strategy by enabling business users to make decisions and take
actions by using data and BI solutions more effectively.
In short, this article describes how you can perform strategic planning to define the
goals and priorities of your BI strategy.
Note
You take the following steps to define your strategic BI goals and priorities.
Step 4: Use the assessments and stakeholder input to decide on the strategic BI goals and
priorities.
The following diagram depicts the roles that appoint the working team
members, and the types of members in a working team.
Item Description
The executive sponsor typically provides top-down goals and support of the working
team, including funding. The executive sponsor may also appoint the working team
members together with the Center of Excellence (COE).
A COE or central BI team confers with the executive sponsor to identify and appoint
working team members. The COE may also provide guidance to the working team to
support their activities.
COE members form part of the working team. They're responsible for using their BI
expertise to drive BI information gathering and complete the current state assessments.
Business SMEs form part of the working team. They represent the interests of their
department or business unit. SMEs are responsible for driving business strategy
information gathering.
Functional team members, like those from a master data team, can form part of the
working team. They're responsible for clarifying strategically important processes during
information gathering.
Technical team members, like those from a data engineering team, can form part of the
working team. They're responsible for clarifying strategically important systems during
information gathering.
Security team members form part of the working team. They're responsible for identifying
and clarifying compliance, security, and privacy requirements during information
gathering.
Other IT team members can form part of the working team. They may identify other
technical requirements related to areas such as networking or application management.
Note
Not all roles depicted in the diagram have to be present in the working team.
Involve roles that are relevant for the scope and scale of your BI strategy initiative.
Important
Working team members are typically appointed and guided by an executive sponsor of
BI and analytics, like the Power BI executive sponsor. Identifying and engaging an
executive sponsor is the first step of a BI strategy initiative.
In addition to the many activities listed in the adoption roadmap, an executive sponsor
plays a key role in BI strategic planning by:
Supporting the working team and COE: The executive sponsor takes a leading
role in defining the BI strategy by providing top-down guidance and
reinforcement.
Allocating resources: They confirm funding, staffing, roles, and responsibilities for
the working team.
Advocating for the initiative: They advance the strategic initiative by:
Legitimizing working team activities, particularly when the working team faces
resistance to change.
Promoting the BI strategy initiative with announcements or public endorsement.
Motivating action and change to progress the BI strategy initiative.
Representing the working team and sharing the BI strategic plan among C-level
executives to obtain executive feedback.
Making strategic decisions: They make decisions about priorities, goals, and
desired outcomes.
Tip
Before assembling the working team, you should first identify and engage an
executive sponsor. Work through this checklist to confirm that you take the
necessary actions to secure a sufficiently engaged executive sponsor.
As part of the scoping exercise, you should also plan how you'll set expectations with
stakeholders that the BI strategy will be defined iteratively.
Planning and preparation: The working team should plan and prepare the various
aspects of the BI strategy initiative, such as:
Defining the timelines, deliverables, and milestones for the initiative.
Identifying stakeholders who can accurately describe the business goals and
objectives of their respective departments.
Communication with stakeholders, the executive sponsor, and each other.
Information gathering: The working team should gather sufficient information to
accurately assess the current state of the BI implementation. Examples of
information gathering activities include:
Conducting independent research about the business context and existing BI
solutions or initiatives.
Running interactive workshops with stakeholders to elicit input about business
objectives and data needs.
Documenting summarized findings and sharing conclusions.
Feedback and follow-up: The working team summarizes the findings from the
information gathered and proposes BI goals, priorities, and next steps. It gathers
feedback and follows up by:
Assessing the current state of BI adoption and implementation.
Creating a prioritized list of business data needs.
Presenting their conclusions and proposed next steps to stakeholders and
executive leadership.
Note
Because the working team communicates with stakeholders and business users,
they're often considered ambassadors for Fabric, Power BI, and other BI initiatives
in your organization. Ensure that your working team members understand and
accept this responsibility.
Some organizations don't have a COE, possibly because the role of a COE is performed
by their BI team or IT. In this case, consider adding members from the BI team or IT to
the working team.
Note
Ensure that the working team doesn't consist only of members from your COE,
central IT, or BI teams. A BI strategy encompasses many areas of the organization,
and each of these areas should be well represented.
Tip
If you don't have a COE, consider establishing one while assembling the working
team. Establishing a COE with the working team members could be a natural
evolution of the working team's activities, once the BI strategic vision is defined.
This approach is a good way to capture the knowledge and understanding that the
working team gained during the BI strategy initiative.
Working team members should include business SMEs. The main responsibility of
business SMEs is to represent their business unit. You should include business SMEs in
the working team to avoid assumptions that result in a BI vision that doesn't work for
part of the organization.
Business SMEs in the working team must have a deep understanding of data needs and
business processes within their business unit or department. Ideally, they should also
understand the BI tools and technologies used to address these needs.
Note
It may not be practical to include every department, business unit, or region in the
working team. In this case, ensure that you dedicate effort to identifying
assumptions and exceptions for any unrepresented departments, business units, or
regions.
Champions network
Working team members can include users from your existing champions network in the
community of practice. A champion typically has exceptional knowledge of both BI tools
and the data needs of their business area. They're often leaders in the community of
practice. The main responsibility of champions in the working team is to promote
information gathering, and to involve their community of practice in the initiative.
Note
Including champions can help to avoid making assumptions that can result in an
inaccurate assessment of the current state of Power BI adoption and
implementation.
Here are some examples of when you might include members from functional areas in
the working team.
The working team creates and manages different channels for each business area. The
separation by business area should correspond to the top-level structure of the initiative.
Each channel contains a repository of summarized findings, timelines, and discussions
about the BI strategy.
Designated working team members curate and moderate the communication hub.
Moderation ensures that the communication hub remains useful and current.
Key stakeholders and working team members actively participate in the communication
hub.
The executive sponsor participates on a limited basis. For example, they might resolve disputes as
they arise.
Tip
We recommend that you use the communication hub beyond the strategic
planning workshops. Because the communication hub is a source of regular input
and discussion from key business stakeholders, it can help your team keep the BI
strategy relevant and up to date.
Here are some recommendations to get the most value from the communication hub.
Have well-defined working team responsibilities: Ensure that the working team
has well-defined responsibilities for the communication hub, such as curating and
moderating it. Having active and involved moderation ensures that the
communication hub remains current, informative, and useful for the working team
and key stakeholders.
Organize discussions and files: Ensure that it's easy to find files or previous
discussions in the communication hub by creating and maintaining a logical
structure. An organized communication hub encourages its effective use.
Be concise in documents and posts: Avoid overwhelming people with large
volumes of information. Key stakeholders have limited time, so encourage people
to publish posts and documents to the communication hub that are concise and
easy to understand.
Be consistent in communication: Ensure that the communication hub is used
instead of alternative channels, like email. You should also strive to ensure that
documents and updates are consistent in tone, format, and length.
Be transparent and foster a collaborative environment: An effective
communication hub has an active, collaborative social environment. It requires
transparency from the working team who should be sharing regular updates and
findings throughout the initiative.
Checklist - When establishing a working team, key decisions and actions include:
These activities are prerequisites for the workshops and complete assessments (step 3).
Business SMEs in the working team should use their expertise to lead the effort to
describe the business context. However, it's important that all members of the working
team participate. It's essential that the working team has a shared understanding of the
business strategy. That way, the BI strategy focuses on addressing business needs
instead of solving abstract, technical problems.
To define an effective BI strategy, the working team must also understand the current
state of BI adoption and implementation. The current state describes how people use
existing data and BI tools, and what data and tools are strategically important. You
should identify the existing BI initiatives and solutions with respect to the business
processes they support and objectives they address. These solutions help illustrate what
business users do today to address their data needs, so that you can assess whether it's
effective.
COE members in the working team should use their expertise to lead the effort to
describe the current state of BI adoption and implementation. An example of an activity
that helps this effort is tenant-level auditing. Auditing allows the working team to collect
an inventory of current BI initiatives and solutions to prepare for workshops.
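As an illustration of what tenant-level auditing can involve, the following sketch
collects a simple workspace inventory by paging through the Power BI admin REST API (the
Get Groups as Admin endpoint) and summarizing the reports and datasets in each workspace.
It's a minimal sketch under a few assumptions: an Azure AD access token with tenant admin
permissions is already available in a hypothetical POWER_BI_TOKEN environment variable,
and this endpoint is only one option; the metadata scanning (scanner) APIs or an existing
auditing solution could serve the same purpose.

Python

import os
from collections import Counter

import requests

# Assumes an Azure AD access token with Power BI tenant admin permissions,
# for example one acquired with MSAL for a service principal.
TOKEN = os.environ["POWER_BI_TOKEN"]  # hypothetical environment variable
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

GROUPS_URL = "https://api.powerbi.com/v1.0/myorg/admin/groups"


def get_workspace_inventory(batch_size: int = 500) -> list:
    """Page through all workspaces, expanding their reports and datasets."""
    workspaces = []
    skip = 0
    while True:
        response = requests.get(
            GROUPS_URL,
            headers=HEADERS,
            params={"$top": batch_size, "$skip": skip, "$expand": "reports,datasets"},
        )
        response.raise_for_status()
        batch = response.json().get("value", [])
        if not batch:
            break
        workspaces.extend(batch)
        skip += batch_size
    return workspaces


if __name__ == "__main__":
    inventory = get_workspace_inventory()
    states = Counter(workspace.get("state") for workspace in inventory)
    print(f"{len(inventory)} workspaces; states: {dict(states)}")
    for workspace in inventory:
        reports = workspace.get("reports", [])
        datasets = workspace.get("datasets", [])
        print(f"{workspace.get('name')}: {len(reports)} reports, {len(datasets)} datasets")

Saving the output as a dated snapshot gives the working team a point-in-time inventory of
current BI solutions that it can compare against stakeholder input during the workshops.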
Important
Ensure that the working team has a good understanding of the compliance
requirements and information protection needed in your organization. Document
these requirements during independent research, and ensure that they're well
understood by everyone in the working team.
The working team researches the business context to document and understand the
business strategy. This research is led by business SMEs for their respective departments or
business units.
The working team researches the business context by first identifying the business goals.
The working team identifies the specific business objectives that departments or business
units set to make progress toward their goals.
Business processes are initiatives or plans created to work toward business objectives. The
working team identifies the processes in place to help achieve the business objectives.
Business data needs are the data, tools, and solutions required to support business
processes and strategic objectives. The working team identifies the business data needs.
The working team researches any existing BI initiatives and solutions to understand the
current state of BI adoption and implementation. COE members or BI experts lead this
research.
The working team investigates strategically important BI solutions to understand how the
organization currently addresses business data needs. Specifically, the working team
identifies who the business users are and how they use the solutions. The working team also
documents key data questions or problems that these solutions address, as well as
potential flaws, opportunities, and inefficiencies.
The working team surveys and documents the existing tools and technologies that the
organization uses to address business data needs.
The working team identifies past or parallel initiatives to define the BI strategy. Past
initiatives might contain valuable learnings, while parallel initiatives may be combined to
avoid duplication of effort.
The working team identifies strategically important KPIs and master data. These KPIs and
master data are critical to enabling the business to achieve their business objectives.
The working team assesses the usage and adoption of strategically important BI solutions
among the user community.
The working team identifies any potential governance and compliance risks identified in
existing BI solutions.
Important
The topics and examples presented in this section are intended to guide you in
conducting your own independent research. These topics aren't an exhaustive or
required list. Use these topics as inspiration. We recommend that you use the
maturity levels documented in the Power BI adoption roadmap to help you
evaluate and prioritize areas that are most important for your organization and its
business context.
Taken together, research on the business context and existing BI initiatives and solutions
describe the current state of BI adoption and implementation. The working team verifies
this research in workshops when capturing stakeholder input.
Plan workshops
While performing independent research, you should plan workshops with stakeholders.
The purpose of these workshops is to gather input about the business objectives and
data needs. You also validate the conclusions from independent research in these
workshops.
Note
This article uses the term workshops to describe interactive meetings with key
stakeholders. The objective of the workshops is to gather input so you can
accurately describe and understand the objectives and data needs.
The following sections describe key considerations for planning and preparing
workshops.
Identifying the right stakeholders is essential in order to run successful workshops and
gain an accurate understanding of the business areas in scope.
Warning
If you engage the wrong stakeholders, there's significant risk that the BI strategy
won't align with strategic business goals or support business users to achieve their
objectives.
The following diagram depicts the process to identify and inform the right key
stakeholders about the BI strategy initiative.
Item Description
List the functional areas (departments and business units) in scope for the BI strategy
initiative.
For each functional area, identify two to three candidate key stakeholder representatives.
Engage with stakeholders to inform them of the initiative, and validate their selection. At
this stage, candidate stakeholders may decline to participate and might suggest alternative
people.
The executive sponsor informs key stakeholders and formally requests their participation.
All further communication with the key stakeholders is posted to the communication hub.
When you initially request key stakeholder participation, ensure that you:
Important
To mitigate this risk, ensure that you involve stakeholders from various levels of the
organization. When selecting key stakeholders, engage with different teams to
briefly introduce the initiative and gather input on who the right stakeholders
might be. This level of engagement not only raises awareness of the initiative, but it
also enables you to involve the right people more easily.
Checklist - When planning workshops and conducting research, key decisions and
actions include:
The working team combines the stakeholder input together with independent research.
These inputs should provide the working team with a sufficient understanding of the
business strategy and the current state of BI adoption and implementation.
With this understanding, the working team evaluates the maturity and effectiveness of
the current state of BI adoption and implementation. This evaluation is summarized in a
data culture assessment and a technical assessment, which are the key outputs of the
workshops. The objective of these assessments is to clearly identify both weaknesses
and opportunities in the data culture, and technical areas that should be prioritized for
the BI strategy.
Run workshops
The workshops are organized as a series of interactive sessions structured to effectively
elicit and collect information from stakeholders. The number of sessions and how
they're conducted will depend on the number of stakeholders, their location, time
availability, and other factors.
The following sections describe the types of sessions you typically conduct when
running workshops.
Introduction session
The introduction session is run by the working team, and it should involve all
stakeholders and the executive sponsor. It introduces the initiative and clarifies the
scope, objectives, timeline, and deliverables.
The objective of this session is to set expectations about the purpose of the workshops
and what's needed for the BI initiative to succeed.
Workshops
Workshops are interactive meetings between a few members of the working team and
the key stakeholders. A member of the working team moderates the discussion, posing
questions to stakeholders to elicit input. Stakeholders provide input about their business
strategy, existing BI initiatives and solutions, and their data needs.
Here are some practical considerations to help you plan and organize effective
workshops.
Keep workshop attendance focused: Don't saturate meetings with too many
attendees. Involving too many people may result in prolonged discussions, or
discussions where only the most assertive personalities provide input.
Keep the discussion focused: Take any debates, excessively specific questions, or
remarks offline to discuss later in short one-on-one meetings. Similarly, identify
and address any resistance directly, and involve the executive sponsor whenever
necessary. Keeping the discussion focused ensures that workshops concentrate on
the overall discussion of strategic planning, and they don't get distracted by small
details.
Be flexible with preparation: Depending on time and preference, you can use
prepared material to conduct more effective discussion. However, understand that
discussions may go in unexpected directions. If a session departs from your
prepared material but still produces helpful input, don't force the discussion back
to a fixed agenda. When stakeholders are focused on a different point, it means
that it's important. Be flexible by addressing these points to capture the most
valuable input.
Document stakeholder input: During the workshops, you should document
stakeholders' inputs about their business objectives and the BI strategy.
Document business data needs: One outcome of workshop information gathering
is a high-level list of the unmet business data needs. You should first organize the
list from the highest to lowest priority. Determine these priorities based on
stakeholder input, and the impact the list items have on business effectiveness.
Note
The list of prioritized data needs is a key outcome of strategic planning that later
facilitates tactical planning and solution planning.
Complete assessments
The working team should combine independent research and the stakeholder input into
summarized findings. These objective findings should convey an accurate description of
the current state of BI adoption and implementation (for conciseness, referred to as the
current state). For each business area in scope, these findings should describe:
Business goals.
Business objectives to make progress towards their goals.
Business processes and initiatives to achieve their objectives.
Business data needs to support the processes and initiatives.
BI tools and solutions that people use to address their business data needs.
How people use the tools and solutions, and any challenges that prevent them
from using the tools and solutions effectively.
With an understanding of the current state, the working team should then proceed to
assess the overall BI maturity and its effectiveness in supporting the business strategy.
These assessments address specific data culture and technical areas. They also help you
to define your priorities by identifying weaknesses and opportunities that you'll
prioritize in your BI strategy. To address these weaknesses and opportunities, you define
high-level, strategic BI goals.
To help identify priorities, the working team conducts two types of assessment: a data
culture assessment and a technical assessment.
Contents of an assessment
Making a concise and accurate assessment of the current state is essential. Assessments
should highlight the strengths and challenges of the organization's ability to use data to
drive decisions and take actions.
Maturity level: Evaluate the overall maturity level on a five-point scale from 100
(initial) to 500 (efficient). The score represents a high-level, subjective assessment
by the working team of the effectiveness in different areas.
Business cases: Justify and illustrate the maturity level scores in the assessment.
Concrete examples include the actions, tools, and processes that business users rely on
to achieve their business objectives with data. The working team uses business cases
together with summarized findings to support their assessment. A business case
typically consists of:
A clear explanation of the desired outcome, and business data needs the
current process aims to address.
An as-is description of how the general process is currently done.
Challenges, risks, or inefficiencies in the current process.
Supplemental information: Supports the conclusions or documents significant details
that are relevant to the BI and business strategy. The working team documents
supplemental information to support later decision-making and tactical planning. (A
minimal sketch of one way to record these assessment contents follows this list.)
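To make these contents more concrete, here's a minimal, hypothetical sketch in Python. It isn't part of the guidance; the class, field names, and example scores are illustrative only. It records a maturity score, business cases, and supplemental information for each assessed area, and surfaces the lowest-scoring areas as candidate priorities.

```python
# Hypothetical sketch only: one way to record assessment contents.
# The class, field names, and example values are illustrative, not prescribed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AreaAssessment:
    area: str                                                  # for example, "Data literacy"
    maturity: int                                              # 100 (initial) through 500 (efficient)
    business_cases: List[str] = field(default_factory=list)    # evidence justifying the score
    supplemental_info: List[str] = field(default_factory=list)

def weakest_areas(assessments: List[AreaAssessment], count: int = 3) -> List[AreaAssessment]:
    """Return the lowest-scoring areas as candidate weaknesses to prioritize."""
    return sorted(assessments, key=lambda a: a.maturity)[:count]

data_culture = [
    AreaAssessment("Data discovery", 100),
    AreaAssessment("Data literacy", 200, ["Finance analysts manually rebuild the same report every month"]),
    AreaAssessment("User support", 300),
]

for assessment in weakest_areas(data_culture, count=2):
    print(assessment.area, assessment.maturity)
```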
The data culture assessment evaluates the current state of BI adoption. In order to
complete this assessment, the working team performs the following tasks.
1. Review summarized findings: The working team reviews the inputs collected from
conducting independent research and running workshops.
2. Evaluate the maturity levels: The working team proceeds through each of the data
culture areas described in this section. Using the Power BI adoption roadmap, they
evaluate the effectiveness of each area by assigning a maturity score.
3. Justify the subjective evaluation with objective evidence: The working team
describes several key business cases and supporting information that justifies their
evaluation of the maturity scores for each area.
4. Identify weaknesses and opportunities: The working team highlights or
documents specific findings that could reflect a particular strength or challenge in
the organization's data culture. It can be the lowest-scoring or highest-scoring
areas, or any areas that they feel have a high impact on the organization's data
culture. These key areas will be used to identify the BI goals and priorities.
Tip
Use the Power BI adoption roadmap to guide you when completing the data
culture assessment. Also, consider other factors specific to your organizational
culture and the ways your users work. If you're looking for more information,
consult other reputable sources like the Data Management Body of Knowledge
(DMBOK) .
The following diagram depicts how the working team assesses the organizational data
culture in BI strategic planning for specific data culture areas.
Item Description
Business alignment: How well the data culture and data strategy enable business users to
achieve business objectives.
Center of Excellence (COE): How effectively a central BI team enables the user community,
and whether this team has filled all the COE roles.
Data literacy: How effectively users are able to read, interpret, and use data to form
opinions and make decisions.
Data discovery: How discoverable the right data is, at the right time, for the people who
need it.
Data democratization: Whether data is put in the hands of users who are responsible for
solving business problems.
Content ownership and management: Whether there's a clear vision for centralized and
decentralized ways that content creators manage data (such as data models), and how
they're supported by the COE.
Content delivery scope: Whether there's a clear vision of who uses, or consumes,
analytical content (such as reports), and how they're supported by the COE.
Mentoring and user enablement: Whether end users have the resources and training to
effectively use data and improve their data literacy.
Community of practice: How effectively people with a common interest can interact with
and help each other on a voluntary basis.
User support: How effectively users can get help when data, tool, or process issues arise.
To evaluate these data culture areas, see the Power BI adoption roadmap. Specifically,
refer to the maturity level sections and Questions to ask sections, which guide you to
perform assessments.
The technical assessment evaluates technical areas that strategically enable the success
of BI implementation. The purpose of this assessment isn't to audit individual technical
solutions or assess the entirety of technical areas related to BI. Instead, the working
team describes the maturity level and general effectiveness for strategically critical areas,
like those described in this section. To complete this assessment, the working team
performs the following tasks.
1. Identify technical areas: The working team identifies specific technical areas that
are relevant and strategically important to the success of BI to include in their
assessment. Some examples of technical areas are described in this section and
shown in the following diagram.
2. Define maturity levels: The working team defines the maturity levels to score the
high-level effectiveness for each technical area in the assessment. These maturity
levels should follow a consistent scale, such as those found in the template
provided in the maturity levels of the Power BI adoption roadmap.
3. Review summarized findings: The working team reviews the collected inputs by
conducting independent research and running workshops.
4. Evaluate the maturity levels: The working team evaluates the effectiveness of each
area by assigning a maturity score.
5. Justify the subjective evaluation with objective evidence: The working team
describes several key business cases and supporting information that justifies their
evaluation of the maturity scores for each area.
6. Identify weaknesses and opportunities: The working team highlights or
documents specific findings that could reflect a particular strength or challenge in
the organization's BI implementation. It can be the lowest-scoring technical areas,
or any areas that they feel have a high impact on the organization's strategic
success with implementing BI tools and processes. These key areas will be used to
identify the BI goals and priorities.
The following diagram depicts technical areas that you might assess when defining your
BI strategy.
Note
If you're adopting Microsoft Fabric, be aware that many of these areas are
represented as separate parts of the Fabric analytics platform.
Data integration: How effectively tools or systems connect to, ingest, and transform data
from various sources to create harmonized views for analytical purposes. Evaluating data
integration means equally assessing enterprise data pipelines and self-service data
integration solutions, like dataflows in Power BI and Fabric.
Data engineering: How effective the current architectures are at supporting analytical use
cases and adapting to changes in business data needs.
Data science: Whether the organization can use exploratory and sophisticated techniques
to discover new insights and benefit from predictive or prescriptive analytics.
Real-time analytics: Whether the organization can correctly identify, capture, and use low
latency data to provide an up-to-date picture of systems and processes.
Data visualization: Whether visualizations can be used effectively to reduce the time-to-
action of reporting experiences for business users. Effective visualizations follow best
practices, directing attention to important, actionable elements, enabling users to
investigate deeper or take the correct actions.
Actions and automation: How consistently and effectively tasks are automated and data
alerts are used to enable manual intervention at critical moments in a system or process.
You should also evaluate how actionable BI solutions are, meaning how effectively and
directly they enable report users to take the right actions at the right time.
Lifecycle management: How effectively content creators can collaborate to manage and
track changes in BI solutions for consistent, regular releases or updates.
Data security: Whether data assets comply with regulatory and organizational policies to
ensure that unauthorized people can't view, access, or share data. Data security is typically
evaluated together with information protection and data loss prevention.
Information protection: How well the organization mitigates risk by identifying and
classifying sensitive information by using tools like sensitivity labels. Information
protection is typically evaluated together with data security and data loss prevention.
Data loss prevention (DLP): Whether the organization can proactively prevent data from
leaving the organization. For example, by using DLP policies based on a sensitivity label or
sensitive information type. DLP is typically evaluated together with data security and
information protection.
Master data management: Whether quantitative fields and business attributes are
effectively managed, centrally documented, and uniformly maintained across the
organization.
Data quality: Whether BI solutions and data are trustworthy, complete, and accurate
according to the business user community.
Artificial intelligence (AI): Whether the organization makes effective use of generative AI
tools and models to enhance productivity in BI processes. Additionally, whether AI is used
to deliver valuable insights in analytics workloads.
Note
The technical areas depicted in the diagram aren't all necessarily part of BI; instead
some are strategic enablers of a successful BI implementation. Further, these areas
don't represent an exhaustive list. Be sure to identify and assess the technical areas
that are strategically important for your organization.
Caution
When performing the technical assessment, don't assess details beyond the scope
of strategic planning. Ensure that all activities that investigate the BI
implementation focus directly on defining and evaluating the current state to
define your BI goals and priorities.
Getting too detailed in the technical assessment risks diluting key messages about
the BI strategy. Always keep in mind the big picture questions like: Where do we
want to go? and How can BI effectively support the business?
Checklist - When running workshops and completing assessments, key decisions and
actions include:
" Decide and communicate the workshop format: Outline the number of sessions,
their length, participants, and other relevant details for participating stakeholders.
" Nominate a moderator from the working team: Decide who from the working
team will moderate the workshops. Their goal is to guide discussions and elicit
information.
" Collect input: Organize the workshops so that you collect sufficient input about the
business strategy and the current state of BI implementation and adoption.
" Summarize findings: Document the inputs that justify the assessments. Include
specific business cases that illustrate strategically important processes and
solutions.
" Complete the maturity assessments: Complete the relevant assessments for the
current state of BI adoption and implementation.
" Document business cases and supporting information: Objectively document the
evidence used to justify the maturity levels you assign in each assessment.
Note
While the working team should be involved in clarifying and documenting goals
and priorities, it isn't responsible for defining them. The executive sponsor and
equivalent decision makers own these decisions. The executive sponsor and other
decision makers have the authority to decide and allocate resources to deliver on
these goals and priorities.
For data culture areas, we recommend that you define your goals by using the
Power BI adoption roadmap. It can help you to identify the maturity level you
should aim to achieve for your desired future state. However, it's not realistic to aim
for a level 500 for each category. Instead, aim for an achievable maturity level
increase in the next planning period.
For technical areas, we recommend that you define your goals by using the maturity
scales described in the technical assessment by the working team.
Before you conclude strategic planning, the working team should align the decided BI
goals and priorities with stakeholders and executives.
The following sections describe how you align with stakeholders and executives.
The alignment session is the final meeting for each business area. Each alignment
session involves key stakeholders and the executive sponsor, who review the
assessments made by the working team.
The objective of this session is to achieve consensus about the conclusions and
assessments, and the agreed BI goals and priorities.
Note
Ensure that stakeholders understand that the BI strategy isn't final and unchanging.
Emphasize that the BI strategy evolves alongside the business and technology.
Ideally, the same stakeholders will continue to take part in this iterative exercise.
The objective of this session is to obtain executive alignment and approval on the
outcomes of strategic planning and the next steps.
Checklist – When deciding BI goals and priorities, key decisions and actions include:
Next steps
In the next article in this series, learn how to conduct BI tactical planning.
Power BI implementation planning: BI
tactical planning
Article • 09/11/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This article helps you to identify your business intelligence (BI) objectives and form
actionable plans to achieve incremental progress toward your strategic BI goals. It's
primarily targeted at:
BI and analytics directors or managers: Decision makers who are responsible for
overseeing the BI program and BI strategic planning.
Center of Excellence (COE), IT, and BI teams: The teams that are responsible for
tactical planning, and for measuring and monitoring progress toward the BI
objectives.
Subject matter experts (SMEs) and content owners and creators: The teams and
individuals that champion analytics in a department and conduct BI solution
planning.
A BI strategy is a plan to implement, use, and manage data and analytics. You define
your BI strategy by starting with BI strategic planning. In strategic planning, you
assemble a working team to research and assess the current state of BI adoption and
implementation to identify your BI goals and priorities. To work toward your BI goals,
the working team defines specific objectives by doing tactical planning. Tactical planning
is based on the assessments done and the BI goals and priorities decided during BI
strategic planning.
In short, this article describes how the working team can perform tactical planning to
define objectives and success for the BI strategy. It also describes how the working team
should prepare to iteratively reevaluate and assess this planning.
Step 1: Identify and describe specific, actionable objectives for your BI goals and priorities.
Step 2: Define what success will look like, and how you'll measure progress toward your
desired outcomes.
Tip
Refer to your assessments from BI strategic planning when you identify and
describe your objectives. Be sure to focus on the key weaknesses to improve, and
the opportunities to leverage.
To start, we recommend that you first address time-sensitive, quick-win, and high-
impact objectives.
Important
When you identify objectives, also consider how you can objectively evaluate and
measure their impact. It's critical that you accurately describe the (potential) return
on investment (ROI) for BI initiatives in order to attain sufficient executive support
and resources. You can assess this impact together with your measures of success
for your BI strategy.
Quick wins may also be high-impact objectives. In this case, they're initiatives or
solutions that have the potential to make substantial advancements across many areas
of the business. Typically, identifying high-impact objectives is essential to progress
further in your BI strategy because they can prompt other, downstream objectives.
Here are some examples of quick-win or high-impact objectives:
Minor changes that improve existing solutions for a large number of end users.
Solution audits and optimizations that improve performance and reduce capacity
usage and costs.
Training initiatives for key users.
Setting up a centralized portal to consolidate a user community of practice.
Creating shared, central themes, templates, and design guidelines for reports.
Tip
Refer to the relevant sections of the Power BI adoption roadmap and the Power BI
implementation planning to help you identify and describe your objectives.
Adoption
First, identify your adoption objectives. These objectives can address many areas, but
typically describe the actions you'll take to improve overall organizational adoption and
data culture.
Governance
Next, identify your governance objectives. These objectives should describe how you'll
sustainably enable users to answer business problems with data, while mitigating risks
to data security or compliance. These governance objectives should be motivated by,
and closely tied to, your adoption objectives.
Important
If you don't have an effective process to monitor user activities and content, you
should make it one of your highest governance priorities. An understanding of
these activities and items informs better governance decisions and actions.
Implementation
Finally, identify your implementation objectives. These objectives have two purposes.
They:
Support adoption and governance objectives: Describe the solutions you build
and initiatives you enact to achieve your adoption and governance objectives.
These solutions help you work toward improving organizational adoption and user
adoption.
Support business data needs: Describe specific solutions you'll build to address
the prioritized needs that the working team described in BI strategic planning.
With these solutions, you should aim to achieve or improve solution adoption.
You can classify implementation objectives as either initiatives or solutions:
Initiatives: Processes, training resources, and policies that support other objectives.
Initiatives are typically non-technical instruments that support users or processes.
Examples of initiatives include:
Processes for self-service content creators so that they can request access to
tools, data, or training.
Governance data policies that describe how certain data should be accessed
and used.
A curated, moderated centralized portal for the user community of practice.
Solutions: Processes or tools built to directly address specific business problems or
data needs for users. Examples of solutions include:
An actionable monitoring solution that allows governance teams to follow up
on governance and adoption objectives.
A unified data lakehouse that delivers business-ready data for consumption by
content creators planning other downstream analytical solutions.
A Power BI app that addresses specific business data needs for content
consumers.
When curating this backlog for your implementation objectives, consider the following
points.
Important
While your implementation objectives aim to address the business data needs, it's
unlikely you'll be able to address all of these needs immediately. Ensure that you
plan to mitigate the potential impact of unmet business data needs that you won't
address now. Try to assess the impact of these data needs and plan to either
partially address them with quick-wins or even stopgap solutions to at least
temporarily alleviate the business impact.
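As one hypothetical illustration, a prioritized solution backlog could rank items by a simple impact-versus-effort score so that quick wins surface first. The solutions, business areas, and scores below are invented for the example and aren't part of the guidance.

```python
# Hypothetical sketch of a prioritized solution backlog. The entries and the
# simple impact/effort scoring are illustrative only.
backlog = [
    {"solution": "Governance monitoring report", "business_area": "COE",     "impact": 5, "effort": 2},
    {"solution": "Sales data lakehouse",          "business_area": "Sales",   "impact": 4, "effort": 5},
    {"solution": "Finance Power BI app",          "business_area": "Finance", "impact": 3, "effort": 2},
]

# Rank the backlog so that high-impact, low-effort items (quick wins) come first.
for item in sorted(backlog, key=lambda s: s["impact"] / s["effort"], reverse=True):
    print(f'{item["solution"]} ({item["business_area"]}): impact {item["impact"]}, effort {item["effort"]}')
```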
Identify obstacles
For each of your objectives, identify any obstacles or dependencies that could hinder
success or block progress. When you identify obstacles, describe how they could affect
your objectives. Define any relevant timelines, what actions could remove these
obstacles, and who should perform these actions. You should also assess the risk of
possible future obstacles that could prevent you from achieving your objectives.
To appraise the skills and knowledge of teams for organizational readiness, ask yourself
the following questions.
Do central teams (like the COE) understand the objectives, and how they relate to
the high-level strategic goals?
Are special training programs needed for topics like security, compliance, and
privacy?
What new tools or processes require user training? Who will organize this training?
After you've favorably assessed organizational readiness, you should proceed with step
2 of tactical planning to define success and how it's measured.
Checklist - When identifying your BI objectives, key decisions and actions include:
" Review BI goals and priorities: Ensure that your BI goals are current, and that
they're understood by everyone who participates in tactical planning.
" Review the current state assessments: The weaknesses and opportunities that the
working team identified in the current state assessments directly inform your
objectives.
" Identify time-sensitive objectives: Identify any objectives that have a defined time
period. Clarify the deadline and its impact on the priority of each objective.
" Identify quick-win objectives: Identify objectives that require low effort or time
investment to achieve. Justify why these are quick-win objectives.
" Identify high-impact objectives: Identify objectives that have a significant impact
on your BI strategy. Define why these objectives have a high impact.
" Identify adoption objectives: Identify objectives that will help you realize your data
culture vision and achieve the BI goals for organizational adoption.
" Identify governance objectives: Identify objectives that will help you balance user
enablement and risk mitigation.
" Identify implementation objectives: Identify objectives to either support defined
adoption and governance objectives or specific business data needs. Classify
implementation objectives as either initiatives or solutions.
" Curate the prioritized solution backlog: Create a prioritized list of BI solutions that
you'll implement this quarter. (You will work through this backlog in BI solution
planning.)
" Assess organizational readiness: Evaluate whether the organization is capable of
achieving the objectives you identified and described—and if not, whether you
need to change objectives or perform specific actions to improve organizational
readiness.
There are two ways to track measurable achievement. Some organizations use KPIs (key
performance indicators), while others use OKRs (objectives and key results). Both
approaches are equally valid.
KPIs and OKRs provide measurable success criteria that you monitor to take corrective
or proactive actions when there's significant deviation from your objectives. What's most
important is that you find an approach to measure progress toward your objectives that
works for your teams and your organization.
Note
Your measures of success should be closely aligned with business objectives. Ensure
that your success criteria aren't specific to technical tasks or implementations.
Instead, they should focus on better enabling business users to work toward
organizational goals.
Caution
Measure a limited number of KPIs or OKRs. These metrics are only useful when you
know what they measure and how you should act upon them. It's better to have a
few strategic, valuable KPIs or OKRs than many metrics, which you don't regularly
monitor or follow up.
You should identify and describe indicators, such as KPIs or OKRs, for your objectives. To
this end, you should first have a clear understanding of the hierarchical relationship
between your BI goals, objectives, and the KPIs or OKRs you want to measure.
Here are some examples of BI goals together with related objectives and the KPIs to
track them.
BI goal: Improve data-driven decision making in the user community.

Objectives:
• Create a data literacy training program to improve the data competences of the user
community.
• Create organizational design standards, templates, and theme files for Power BI
reports, and adopt these standards in business-critical reporting solutions.
• Hold weekly office hours events to allow users to ask questions about central reports,
or request guidance for their decentralized self-service BI solutions.

KPIs:
• Number of users trained in the data literacy program: Measures how many users have
completed data literacy training and have achieved a passing score.
• Time-to-insight: Uses controlled trials to measure how long it takes a random sample of
users to correctly answer typical business questions from available datasets and reports.
A fast (low) time-to-insight indicates effective data-driven decision making.
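As a purely illustrative sketch, the hierarchical relationship between a goal, its objectives, and their indicators could be captured like this. The names, targets, and current values are hypothetical and aren't taken from this guidance.

```python
# Hypothetical sketch of the hierarchy between a BI goal, its objectives,
# and the KPIs that measure them. All names and values are illustrative.
bi_goal = {
    "goal": "Improve data-driven decision making in the user community",
    "objectives": [
        {
            "objective": "Create a data literacy training program",
            "kpis": [{"name": "Users trained", "target": 200, "current": 135}],
        },
        {
            "objective": "Adopt shared report design standards",
            "kpis": [{"name": "Time-to-insight (minutes)", "target": 5, "current": 9}],
        },
    ],
}

# Report each KPI in the context of its objective and the overall goal.
for objective in bi_goal["objectives"]:
    for kpi in objective["kpis"]:
        print(f'{bi_goal["goal"]} > {objective["objective"]} > '
              f'{kpi["name"]}: {kpi["current"]} (target {kpi["target"]})')
```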
Important
Ensure that your chosen KPIs or OKRs genuinely reflect your desired outcomes.
Regularly evaluate these indicators to avoid incentivizing counterproductive
behaviors. Consider Goodhart's Law, which states: When a measure becomes a
target, it ceases to be a good measure.
Once you implement relevant indicators to measure progress toward your BI objectives,
you should regularly monitor them to track progress and take action where necessary.
Here are some key decisions and considerations to help you successfully use KPIs or
OKRs.
Report your KPIs or OKRs: Create reporting solutions for your indicators that let
you effectively monitor them. Ensure that these reports are highly visible for the
relevant teams and individuals who need this information. In the reports,
communicate how the metric is calculated and which strategic objective it
supports.
Automate data collection: Ensure that data for KPIs and OKRs aren't collected
manually. Find efficient ways to streamline and automate the collection of the data
so that it's current, accurate, and reliable.
Track change: Visualize not only the current indicator value but also the trend over
time. Progress is best demonstrated as a gradual improvement. If the indicator exhibits
high volatility or variance, consider using a moving average to better illustrate the
trend (a minimal sketch follows this list).
Assign an owner: Ensure that a team or individual is responsible for measuring the
indicator and keeping its data current.
Define an acceptable range: Establish targets or an acceptable range of values to
assign status (like on track or off track) to the indicator. When values fall outside
the target or range, it should prompt someone to investigate or take corrective
action.
Set up data-driven alerts: Set up automated alerts that notify key teams or
individuals, for example, by using Power Automate. That way, timely action can be
taken when the indicator is off track.
Define actions and interventions: Clearly describe how you'll use this information
to take action, either to address issues or to justify moving to the next step in your
BI strategy.
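The following sketch is illustrative only. It assumes a hypothetical monthly KPI and uses pandas to show the idea of smoothing a volatile indicator with a moving average and assigning an on-track or off-track status against a target range.

```python
# Illustrative sketch: smooth a volatile KPI with a moving average and
# assign a status against a target. The KPI values are made up.
import pandas as pd

kpi = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
    "users_trained": [12, 40, 8, 55, 20, 60],   # hypothetical monthly values
})

# Three-month moving average to better illustrate the trend.
kpi["trend"] = kpi["users_trained"].rolling(window=3, min_periods=1).mean()

TARGET = 30  # hypothetical acceptable lower bound per month
kpi["status"] = ["On track" if value >= TARGET else "Off track" for value in kpi["trend"]]

print(kpi)
```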
Checklist - When considering your desired future state, key decisions and actions
include:
" Define success for your BI strategy: Clearly describe the success criteria for each of
your objectives.
" Identify measures of success: For each objective, identify how to measure progress.
Ensure that these measures can be reliably tracked and that they'll effectively
encourage the behaviors that you expect.
" Create KPIs or OKRs to measure progress toward critical goals: Create indicators
that objectively report progress toward your strategic BI goals. Ensure that the
indicators are well documented and have clear owners.
" Create monitoring solutions: Create solutions that automatically collect data for
KPI or OKR reporting. Set up data alerts for when the KPIs or OKRs exceed
thresholds or fall outside of an acceptable range. Agree upon the necessary action
to take when these metrics get off track, and by whom.
" Identify obstacles to success: Ensure that key risks and obstacles to success are
identified and define how you'll address them.
" Evaluate organizational readiness: Assess how prepared the organization is to
adopt and implement the BI strategy and enact your tactical plan.
" Plan to address gaps in skills, knowledge, and executive support: Ensure that gaps
are addressed in objectives and tactical planning.
" Validate tactical planning with executives: Request that the executive sponsor
present an executive summary of the tactical planning to leadership for their input.
" Validate tactical planning with key stakeholders: Obtain feedback from key
stakeholders about the final tactical plan and its objectives.
We recommend that you conduct tactical planning at regular intervals with evaluation
and assessment, as depicted in the following diagram.
The diagram depicts how you can iteratively revise the BI strategy to achieve
incremental progress.
Item Description
BI strategic planning: Define and reassess your BI goals and priorities every 12-18 months.
In between BI strategic planning sessions, strive for incremental progress toward your BI
goals by achieving your BI objectives defined in tactical planning. Additionally, in between
strategic planning, you should collect feedback to inform future strategic decision-making.
BI tactical planning: Identify and reevaluate your BI objectives every 1-3 months. In
between, you implement these tactical plans by building BI solutions and launching BI
initiatives. Additionally, in between tactical planning, you should collect feedback and
monitor your KPIs or OKRs to inform future tactical decision-making.
Future objectives and priorities defined in your strategic and tactical planning are
informed by using regular feedback and evaluation mechanisms, such as those
described in the following sections.
Business objectives regularly change, resulting in new business data needs and changing
requirements. For this reason, your tactical planning must be flexible and remain well
aligned with the business strategy.
Here are some considerations to help you respond to technological changes that can
affect your tactical planning.
Follow updates: Keep current with new developments and features in Microsoft
Fabric. Read the monthly community blog posts and keep pace with
announcements at conference events.
Document key changes: Ensure that any impactful changes are included in your
tactical planning, and include relevant references. Call attention to any changes
that have a direct or urgent impact on business data needs or BI objectives.
Decide how to handle features in preview: Clarify how you'll use new preview
features that aren't yet generally available. Identify any preview features or tools
that have a strategic impact in your organization or help you achieve strategic
objectives. Consider how you'll benefit from these preview features while
identifying and mitigating any potential risks or limitations.
Decide how to handle new third-party and community tools: Clarify your policy
about third-party and community tools. If these tools are allowed, describe a
process to identify new tools that have a strategic impact in your organization or
help you achieve strategic objectives. Consider how you'll benefit from these tools
while identifying and mitigating any potential risks or limitations.
Checklist - When planning to revise your strategic and tactical planning, key decisions
and actions include:
Next steps
In the next article in this series, learn how to conduct BI solution planning.
Power BI implementation planning: BI
solution planning
Article • 09/11/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This article helps you to plan solutions that support your business intelligence (BI)
strategy. It's primarily targeted at:
BI and analytics directors or managers: Decision makers who are responsible for
overseeing the BI program and strategically important BI solutions.
Center of Excellence (COE), IT, and BI teams: The teams that design and deploy
enterprise BI solutions for their organization.
Subject matter experts (SMEs) and content owners and creators: The teams and
individuals that champion analytics in a department and design and deploy
solutions for self-service, departmental BI, or team BI usage scenarios.
A BI strategy is a plan to implement, use, and manage data and analytics. You define
your BI strategy by starting with BI strategic planning. Strategic planning helps you to
identify your BI goals and priorities. To determine the path to progress toward your BI
goals, you describe specific objectives by using tactical planning. You then achieve
progress toward your BI objectives by planning and deploying BI solutions.
There are many approaches to plan and implement BI solutions. This article describes
one approach that you can take to plan and implement BI solutions that support your BI
strategy.
Step 1: Assemble a project team that gathers requirements and defines the design of the
solution.
Step 2: Plan for solution deployment by performing initial setup of tools and processes.
Step 3: Conduct a solution proof of concept (POC) to validate assumptions about the design.
Step 4: Create and validate content by using iterative development and validation cycles.
Step 5: Deploy, support, and monitor the solution after it's released to the production
environment.
Note
For more information, see the Power BI migration series. While the series is concerned
with migration, the key actions and considerations are relevant to solution planning.
Note: Strategic and tactical planning is led by the working team that drives the BI strategy
initiative. In contrast, solution planning is led by a project team, which consists of content
owners and creators.
Gathering the right requirements is critical to achieve successful solution deployment
and adoption. An effective way to gather requirements is to identify and involve the
right stakeholders, collaboratively define the problem to be solved, and use that shared
understanding of the problem to create a solution design.
Here are some benefits from using a collaborative approach to gather requirements.
You can take different approaches to engage users and gather requirements. For
example, you can gather requirements with business design and technical design
(described in detail in later sections of this article).
Identify who will conduct solution planning: As part of the BI tactical planning,
the working team created a prioritized backlog of solutions. In solution planning, a
project team is responsible for designing, developing, and deploying one or more
solutions from the backlog. For each solution in the backlog, you should assemble
a project team that will be responsible for the solution. In addition to running BI
solution planning, the project team should:
Define timelines and milestones for solution planning.
Identify and involve the right stakeholders for requirements gathering.
Set up a centralized location for communication, documentation, and planning.
Engage stakeholders to gather requirements.
Communicate and coordinate with stakeholders and business users.
Orchestrate iterative development and testing cycles with business users.
Document the solution.
Onboard users to the solution by creating and enacting a training plan.
Provide post-deployment solution support.
Address user requests to change or update the solution after deployment.
Conduct solution handover after deployment, if necessary.
Centralize communication and documentation: It's important that the project
team centralizes communication and documentation for BI solution planning. For
example, the project team should centralize requirements, stakeholder
communication, timelines, and deliverables. Consider storing all documentation in
a centralized portal.
Plan requirements gathering: The project team should begin by planning the
business design sessions to gather business requirements. These sessions take the
form of interactive meetings, and they can follow a similar format to the strategic
planning workshops.
Tip
Consider identifying and involving the support teams responsible for the solution
early in the requirements gathering process. To effectively support the solution, the
support teams will need a comprehensive understanding of the solution, its
purpose, and the users. That's particularly important when the project team is composed
only of external consultants.
The following diagram depicts how to gather business requirements and define the
solution design by using a business design approach.
The diagram depicts the following steps.
Item Description
The project team begins the business design by confirming the solution scope that was
first documented in tactical planning. They should clarify the business areas, systems, and
data covered by the solution.
The project team identifies key stakeholders from the user community who will be
involved in the business design sessions. Key stakeholders are users with sufficient
knowledge and credibility to represent the subject areas of the solution.
The project team plans business design sessions. Planning involves informing stakeholders,
organizing meetings, preparing deliverables, and engaging with business users.
The project team gathers and researches existing solutions that business users currently
use to address existing business data needs. To accelerate this process, the project team
can use relevant research from BI strategic planning, which has been documented in the
communication hub.
The project team runs business design sessions with stakeholders. These sessions are
small, interactive meetings, where the project team guides stakeholders to understand
business data needs and requirements.
The project team concludes the business design by presenting a draft solution design to
stakeholders and other users for feedback and approval. The business design is successful
when the stakeholders agree that the design will help them achieve their business
objectives.
The business design deliverables are used in, and validated by, the technical design.
The technical design typically involves technical and functional stakeholders, such as:
Security and networking teams: Responsible for ensuring security and compliance
of the data.
Functional teams and data stewards: Responsible for curating the source data.
Architects: Owners of specific platforms, tools, or technology.
The project team engages stakeholders in technical design sessions to address technical
aspects of the solution. Technical aspects can include:
Data source connections: Details about how to connect to, and integrate, data
sources.
Networking and data gateways: Details about private networks or on-premises
data sources.
Field source mapping: Data mappings of business metrics and attributes to data source
fields (a minimal sketch follows this list).
Calculation logic: A translation of business definitions to technical calculations.
Technical features: Features or functionality needed to support business
requirements.
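Here's a minimal, hypothetical sketch of a field source mapping that also records calculation logic. The source names and formulas are invented for illustration and aren't part of the guidance.

```python
# Hypothetical sketch of a field source mapping produced during technical design.
# Each business metric or attribute maps to a data source field; quantitative
# fields also record their calculation logic. All names are illustrative.
field_mapping = [
    {
        "business_name": "Gross margin %",
        "source_table": "warehouse.sales.fact_orders",
        "source_fields": ["revenue", "cost_of_goods"],
        "calculation": "SUM(revenue - cost_of_goods) / SUM(revenue)",
    },
    {
        "business_name": "Customer segment",
        "source_table": "warehouse.crm.dim_customer",
        "source_fields": ["segment_code"],
        "calculation": None,  # attribute only; no calculation required
    },
]

for row in field_mapping:
    print(f'{row["business_name"]} <- {row["source_table"]} ({", ".join(row["source_fields"])})')
```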
Tip
The project team who conducted the business design should also conduct the
technical design. However, for practical reasons, different individuals may lead the
technical design. In this case, begin the technical design by reviewing the outcomes
of the business design.
Ideally, the individuals who lead the technical design should have a thorough
understanding of the outcomes and the business users.
The following diagram depicts how to translate business requirements into technical
requirements by using a technical design.
The diagram depicts the following steps.
Item Description
The project team begins the technical design by defining the data source scope based on
the results of the business design. To identify the right data sources, the project team
consults with the business and functional SMEs.
The project team identifies technical or functional stakeholders to involve later in the
technical design sessions.
The project team plans limited, focused sessions with functional stakeholders to address
technical aspects of the solution. Planning involves informing stakeholders, organizing
meetings, and preparing deliverables.
The project team researches technical requirements. Research includes defining field
calculations and data source mappings, and also addressing the business design
assumptions with technical analysis and documentation.
If necessary, the project team involves stakeholders in technical design sessions. Sessions
focus on a specific, technical aspect of the solution, like security or data source
connections. In these sessions, the project team gathers qualitative feedback from
stakeholders and SMEs.
The project team prepares their findings by using a solution plan, which they present to
stakeholders and decision makers. The plan is an iteration and extension of the business
design outcomes that includes the final design, estimations, and other deliverables.
The technical design should conclude with a final meeting with stakeholders and decision
makers to decide whether or not to proceed. This meeting provides a final opportunity to
evaluate the solution planning before resources are committed to developing the solution.
Note
The technical design may reveal unexpected complexity that makes the solution
planning infeasible given the current resource availability or organizational
readiness. In this case, the solution should be reevaluated in the subsequent
tactical planning period. Depending on the urgency of the business data needs, a
decision maker, like the executive sponsor, may still want to proceed with a proof
of concept, or only one part of the planned solution.
The technical design concludes with a solution plan, which consists of the following
deliverables.
Important
Ensure that the project team notifies stakeholders of any changes or unexpected
discoveries from the technical design. These technical design sessions should still
involve relevant business users. However, ensure that stakeholders aren't
unnecessarily exposed to complex technical information.
Tip
Because business objectives invariably evolve, it's expected that requirements will
change. Don't assume that requirements for BI projects are fixed. If you struggle
with changing requirements, it may be an indication that your requirements
gathering process isn't effective, or that your development workflows don't
sufficiently incorporate regular feedback.
" Clarify who owns solution planning: For each solution, ensure that roles and
responsibilities are clear for the project team.
" Clarify the solution scope: The solution scope should already be documented as
part of BI tactical planning. You may need to spend additional time and effort to
clarify the scope before you start solution planning.
" Identify and inform stakeholders: Identify stakeholders for business designs and
technical designs. Inform them in advance about the project and explain the scope,
objectives, required time investment, and deliverables from the business design.
" Plan and conduct business design sessions: Moderate the business design sessions
to elicit information from stakeholders and business users. Request that users
demonstrate how they use existing solutions.
" Document business metrics and attributes: By using existing solutions and input
from stakeholders, create a list of business metrics and attributes. In the technical
designs, map the fields to the data source and describe the calculation logic for
quantitative fields.
" Draft the solution design: Create iterative mock-ups based on stakeholder input
that visually reflect the expected solution result. Ensure that mock-ups accurately
represent and address the business requirements. Communicate to business users
that the mock-ups must still be validated (and possibly revised) during the technical
design.
" Create the solution plan: Research source data and relevant technical
considerations to ensure that the business design is achievable. Where relevant,
describe key risks and threats to the design, and any alternative approaches. If
necessary, prepare a revision of the solution design and discuss it with the
stakeholders.
" Create effort estimates: As part of the final solution plan, estimate the effort to
build and support the solution. Justify these estimates with the information
gathered during the business design and technical design sessions.
" Decide whether to proceed with the plan: To conclude the requirements gathering
process, present the final plan to stakeholders and decision makers. The purpose of
this meeting is to determine whether to proceed with solution development.
Step 2: Plan for deployment
When the project team has finished gathering requirements, created the solution plan, and
received approval to proceed, it's ready to plan for solution deployment. Deployment
planning typically addresses the following areas.
Initial tools and processes: Perform first-time setup for any new tools and
processes needed for development, testing, and deployment.
Identities and credentials: Create security groups and service principals that will
be used to access tools and systems. Effectively and securely store the credentials.
Data gateways: Deploy data gateways for on-premises data sources (enterprise
mode gateways) or data sources on a private network (virtual network, or VNet,
gateways).
Workspaces and repositories: Create and set up workspaces and remote repositories for
publishing and storing content (a minimal sketch of creating a workspace follows this list).
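As a minimal sketch of this first-time setup, a workspace can be created programmatically with the Power BI REST API. This is illustrative only: the workspace name is hypothetical, and acquiring an Azure AD access token for a service principal is out of scope here and represented by a placeholder.

```python
# Illustrative sketch: create a workspace with the Power BI REST API as part of
# first-time deployment setup. Replace the placeholder token with one acquired
# for your service principal; the workspace name is hypothetical.
import requests

ACCESS_TOKEN = "<Azure AD access token for the Power BI service>"  # placeholder

response = requests.post(
    "https://api.powerbi.com/v1.0/myorg/groups",            # Create Group (workspace) endpoint
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"name": "Sales Analytics [Dev]"},
    timeout=30,
)
response.raise_for_status()
print("Created workspace with ID:", response.json().get("id"))
```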
Note
For more information about deployment planning, see Plan deployment to migrate to
Power BI.
Checklist - When planning solution deployment, key decisions and actions include:
" Plan for key areas: Plan to address the processes and tools that you need to
successfully develop and deploy your solution. Address both technical areas (like
data gateways or workspaces) and also adoption (like user training and
governance).
" Conduct initial setup: Establish the tools, processes, and features that you need to
develop and deploy the solution. Document the setup to help others who will need
to do a first-time setup in the future.
" Test data source connections: Validate that the appropriate components and
processes are in place to connect to the right data to start the proof of concept.
Step 3: Conduct a proof of concept
The project team conducts a solution proof of concept (POC) to validate outstanding
assumptions and to demonstrate early benefits for business users. A POC is an initial
design implementation that's limited in scope and maturity. A well-run POC is
particularly important for large or complex solutions because it can identify and address
complexities (or exceptions) that weren't detected in the technical design.
Objectives and scope: Describe the purpose of the solution POC and the
functional areas it will address. For example, the project team may decide to limit
the POC to a single functional area, or a specific set of requirements or features.
Source data: Identify what data will be used in the POC. Depending on the
solution, the project team may decide to use different types of data, such as:
Production (real) data
Sample data
Generated synthetic data that resembles actual data volumes and complexity
observed in production environments
Demonstration: Describe how and when the project team will demonstrate the
POC to stakeholders and users. Demonstrations may be given during regular
updates, or when the POC fulfills specific functional criteria.
Environment: Describe where the project team will build the POC. A good
approach is to use a distinct sandbox environment for the POC, and deploy it to a
development environment when it's ready. A sandbox environment has more
flexible policies and fluid content, and it's focused on producing quick results. In
contrast, a development environment follows more structured processes that
enable collaboration, and it focuses on completing specific tasks.
Success criteria: Define the threshold for when the POC is successful and should
move to the next iteration and enter formal development. Before starting the POC,
the project team should identify clear criteria for when the POC is successful. By
setting these criteria in advance, the project team defines when the POC
development ends and when iterative development and validation cycles begin.
Depending on the objectives of the POC, the project team may set different
success criteria, such as:
Approval of the POC by stakeholders
Validation of features or functionality
Favorable review of the POC by peers after a fixed development time
Failure: Ensure that the project team can identify failure of the POC. Identifying
failure early on will help to investigate root causes. It can also help to avoid further
investment in a solution that won't work as expected when it's deployed to
production.
Caution
When the project team conducts the POC, they should remain alert for assumptions
and limitations. For example, the project team can't easily test solution
performance and data quality by using a small set of data. Additionally, ensure that
the scope and purpose of the POC is clear to the business users. Be sure to
communicate that the POC is a first iteration, and stress that it's not a production
solution.
Note
For more information, see Conduct proof of concept to migrate to Power BI.
" Define the objectives: Ensure that the objectives of the POC are clear to all people
who are involved.
" Define the scope of the POC: Ensure that creating the POC won't take too much
development effort, while still delivering value and demonstrating the solution
design.
" Decide what data will be used: Identify what source data you'll use to make the
POC, justifying your decision and outlining the potential risks and limitations.
" Decide when and how to demonstrate the POC: Plan to show progress by
presenting the POC to decision makers and business users.
" Clarify when the POC ends: Ensure that the project team decides on a clear
conclusion for the POC, and describe how it'll be promoted to formal development
cycles.
Tip
Iterative delivery encourages early validation and feedback that can mitigate
change requests, promote solution adoption, and realize benefits before the
production release.
Iterative development and validation cycles proceed until the project team arrives at a
predefined conclusion. Typically, development concludes when there are no more
features to implement or user feedback to address. When the development and
validation cycles conclude, the project team deploys the content to a production
environment with the final production release.
The following diagram depicts how the project team can iteratively deliver BI solutions
with development and validation cycles.
The diagram depicts the following steps.
Item Description
The project team communicates each release to the user community, describing changes
and new features. Ideally, communication includes a solution demonstration and Q&A, so
users understand what's new in the release, and they can provide verbal feedback.
During validation, users provide feedback via a central tool or form. The project team
should review feedback regularly to address issues, accept or reject requests, and inform
upcoming development phases.
The project team monitors usage of the solution to confirm that users are testing it. If
there isn't any usage, the project team should engage with the user community to
understand the reasons why. Low usage may indicate that the project team needs to take
further enablement and change management actions.
The project team promptly responds to user feedback. If the project team takes too long
to address feedback, users may quickly lose motivation to provide it.
The project team incorporates accepted feedback into the solution planning. If necessary,
they review the planning priorities to clarify and delegate tasks before the next
development phase begins.
The project team continues development of the solution for the next release.
The project team iterates through all steps until they reach a predefined conclusion, and
the solution is ready for production deployment.
The following sections describe key considerations for using iterative development and
validation cycles to deliver BI solutions.
Create content
The project team develops the solution by following their normal development
workflow. However, they should consider the following points when creating content.
Validate content
Each iterative development cycle should conclude with content validation. For BI
solutions, there are typically two kinds of validation.
Developer validation: Solution testing is done by content creators and peers. The
purpose of developer validation is to identify and resolve all critical and visible
issues before the solution is made available to business users. Issues can pertain to
data correctness, functionality, or the user experience. Ideally, content is validated
by a content creator who didn't develop it.
User validation: Solution testing is done by the user community. The purpose of
user validation is to provide feedback for later iterations, and to identify issues that
weren't found by developers. Formal user validation periods are typically referred
to as user acceptance testing (UAT).
Important
Ensure that any data quality issues are addressed during developer validation
(before UAT). These issues can quickly erode trust in the solution, and they can
harm long-term adoption.
Tip
When conducting user validation, consider occasional, short calls with key users.
Observe them when they use the solution. Take notes about what they find difficult
to use, or what parts of the solution aren't working as expected. This approach can
be an effective way to collect feedback.
Factor in the following considerations when the project team validates content.
Encourage user feedback: With each release, request users provide feedback, and
demonstrate how they can effectively do so. Consider regularly sharing examples
of feedback and requests that have led to recent changes and new features. By
sharing examples, you're demonstrating that feedback is acknowledged and
valued.
Isolate larger requests: Some feedback items require more effort to address.
Ensure that the project team can identify these items and discuss whether they'll
be implemented, or not. Consider documenting larger requests to discuss in later
tactical planning sessions.
Begin change management activities: Train users how to use the solution. Be sure
to spend extra effort on new processes, new data, and different ways of working.
Investing in change management has a positive return on long-term solution
adoption.
When the solution reaches a predefined level of completeness and maturity, the project
team is ready to deploy it to production. After deployment, the project team transitions
from iterative delivery to supporting and monitoring the production solution.
Note
Development and testing differ depending on the solution and your preferred
workflow.
This article describes only high-level planning and actionable items. For more
information about iterative development and testing cycles, see Create content to
migrate to Power BI.
Checklist - When creating and validating content, key decisions and actions include:
" Use an iterative process to plan and assign tasks: Plan and assign tasks for each
release of the solution. Ensure that the process to plan and assign tasks is flexible
and incorporates user feedback.
" Set up content lifecycle management: Use tools and processes to streamline and
automate solution deployment and change management.
" Create a tool to centralize feedback: Automate feedback collection by using a
solution that's simple for you and your users. Create a straightforward form to
ensure that feedback is concise yet actionable.
" Schedule a meeting to review feedback: Meet to briefly review each new or
outstanding feedback item. Decide whether you'll implement the feedback or not,
who will be responsible for the implementation, and what actions to take to close
the feedback item.
" Decide when iterative delivery concludes: Describe the conditions for when the
iterative delivery cycles will conclude, and when you'll release content to the
production environment.
To ensure a successful deployment, you perform several support and adoption tasks.
Caution
After deployment, the project team should plan to proceed to the next solution in the
prioritized solution backlog. Ensure that you collect any new feedback and requests and
make revisions to tactical planning—including the solution backlog—if necessary.
Checklist – When considering solution deployment, key decisions and actions include:
" Create a communication plan: Plan how to communicate the release, training, and
other solution support or adoption actions. Ensure that any outages or issues are
communicated and promptly addressed in the post-deployment support period.
" Follow through with a training plan: Train users to use the solution. Ensure that the
training includes both live and recorded training sessions for several weeks after
release.
" Conduct handover activities: If necessary, prepare a handover from the
development team to the support team.
" Conduct solution office hours: After the post-deployment support period, consider
holding regular office hours sessions to answer questions and collect feedback from
users.
" Set up a continuous improvement process: Schedule a monthly audit of the
solution to review potential changes or improvements over time. Centralize user
feedback and review feedback periodically between audits.
Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.
Power BI implementation planning:
Tenant setup
Article • 08/23/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This tenant setup article introduces important aspects to know about setting up your
Fabric tenant, with an emphasis on the Power BI experience. It's targeted at multiple audiences.
Fabric is part of a larger Microsoft ecosystem. If your organization is already using other
cloud subscription services, such as Azure, Microsoft 365, or Dynamics 365, then Fabric
operates within the same Azure AD tenant. Your organizational domain (for example,
contoso.com) is associated with Azure AD. Like all Microsoft cloud services, your Fabric
tenant relies on your organization's Azure AD for identity and access management.
Azure AD tenant
Most organizations have one Azure AD tenant, so an Azure AD tenant commonly represents an entire organization.
Unmanaged tenant
A managed tenant has a global administrator assigned within Azure AD. If an Azure AD tenant doesn't exist for an organizational domain (for example, contoso.com) and the first user from that organization signs up for a Fabric trial or account, an unmanaged tenant is created in Azure AD. An unmanaged tenant is also known as a shadow tenant,
or a self-service-created tenant. It has a basic configuration, allowing the cloud service
to work without assigning a global administrator.
Tip
The administration of Azure AD is a broad and deep topic. We recommend that you
assign specific people in your IT department as system administrators to securely
manage Azure AD for your organization.
Checklist - When reviewing your Azure AD tenant for use with Fabric, key decisions and
actions include:
" Take over tenant: If applicable, initiate the process to take over an unmanaged
tenant.
" Confirm the Azure tenant is managed: Verify that your system administrators
actively manage your Azure AD tenant.
Every Azure AD tenant has a globally unique identifier (GUID) known as the tenant ID. In
Fabric, it's known as the customer tenant ID (CTID). The CTID is appended to the end of
the tenant URL. You can find the CTID in the Fabric portal by opening the About
Microsoft Fabric dialog window. It's available from the Help & Support (?) menu, which is
located at the top-right of the Fabric portal.
Knowing your CTID is important for Azure AD B2B scenarios. URLs that you provide to
external users (for example, to view a Power BI report) must append the CTID parameter
in order to access the correct tenant.
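For example, a report link shared with an external guest user might take the following form, where the workspace ID, report ID, and tenant ID are hypothetical placeholders:
https://app.powerbi.com/groups/<workspace-id>/reports/<report-id>?ctid=<your-tenant-id>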
Checklist - When granting external users permission to view your content, or when you
have multiple tenants, key decisions and actions include:
" Include your CTID in relevant user documentation: Record the URL that appends
the tenant ID (CTID) in user documentation.
" Set up custom branding in Fabric: In the Fabric admin portal, set up custom
branding to help users identify the organizational tenant.
Azure AD administrators
Fabric administrators periodically need to work with the Azure AD administrators.
The following list includes some common reasons for collaboration between Fabric
administrators and Azure AD administrators.
Security groups: You'll need to create new security groups to properly manage the
Fabric tenant settings. You may also need new groups to secure workspace content
or for distributing content.
Security group ownership: You may want to assign a group owner to allow more
flexibility in who can manage a security group. For example, it could be more
efficient to permit the Center of Excellence (COE) to manage the memberships of
certain Fabric-specific groups.
Service principals: You may need to create an Azure AD app registration to provision a service principal. Authenticating with a service principal is a recommended practice when a Fabric administrator wants to run unattended, scheduled scripts that extract data by using the admin APIs (see the sketch after this list), or when embedding content in an application.
External users: You'll need to understand how the settings for external (guest)
users are set up in Azure AD. There are several Fabric tenant settings related to
external users, and they rely on how Azure AD is set up. Also, certain security
capabilities for the Power BI workload only work when using the planned invitation
approach for external users in Azure AD.
Real-time control policies: You may choose to set up real-time session control
policies, which involves both Azure AD and Microsoft Defender for Cloud Apps. For
example, you can prohibit the download of a Power BI report when it has a specific
sensitivity label.
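To illustrate the service principal scenario in the list above, the following minimal Python sketch authenticates with a service principal and calls a Power BI admin API to list workspaces. It assumes the msal and requests packages are installed and that the app registration has been allowed to use the admin APIs; the tenant ID, client ID, and secret are hypothetical placeholders.

import msal
import requests

# Hypothetical placeholders: supply your own tenant ID, app (client) ID, and secret.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "<app-secret-from-a-secure-store>"

# Acquire an app-only token by using the client credentials flow.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)
if "access_token" not in token:
    raise RuntimeError(token.get("error_description", "Authentication failed"))

# Call an admin API to retrieve workspaces in the tenant.
headers = {"Authorization": f"Bearer {token['access_token']}"}
response = requests.get(
    "https://api.powerbi.com/v1.0/myorg/admin/groups?$top=100",
    headers=headers,
)
response.raise_for_status()
for workspace in response.json().get("value", []):
    print(workspace["id"], workspace.get("name"))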
Checklist - When considering how to work with your Azure AD administrators, key
decisions and actions include:
" Identify your Azure AD administrators: Make sure you know the Azure AD
administrators for your organization. Be prepared to work with them as needed.
" Involve your Azure AD administrators: As you work through your implementation
planning process, invite Azure AD administrators to pertinent meetings and involve
them in relevant decision-making.
Home region
The home region is important because it determines where your tenant's data is stored by default, which affects data residency requirements and the proximity of the data to your users.
The home region for the organization's tenant is set to the location of the first user that
signs up. If most of your users are located in a different region, that region might not be
the best choice.
You can determine the home region for your tenant by opening the About Microsoft
Fabric dialog window in the Fabric portal. The region is displayed next to the Your data is
stored in label.
You may discover that your tenant resides in a region that isn't ideal. You can use the
Multi-Geo feature by creating a capacity in a specific region (described in the next
section), or you can move your tenant. To move it to another region, your global
Microsoft 365 administrator should open a support request.
The relocation of a tenant to another region isn't a fully automated process, and some
downtime is involved. Be sure to take into consideration the prerequisites and actions
that are required before and after the move.
Tip
Because a lot of effort is involved, when you determine that a move is necessary,
we recommend that you do it sooner rather than later.
Checklist - When considering the home region for storing data in your tenant, key
decisions and actions include:
" Identify your home region: Determine the home region for your tenant.
" Initiate the process to move your tenant: If you discover that your tenant is located
in an unsuitable geographic region (that can't be solved with the Multi-Geo
feature), research the process to move your tenant.
Regulatory, industry, or legal requirements can require you to store certain data outside the home region (described in the previous section). In these situations,
you can benefit from the Multi-Geo feature by creating a capacity in a specific region. In
this case, you must assign workspaces to the correct capacity to ensure that the
workspace data is stored in the desired geographic location.
Note
The Multi-Geo feature requires a capacity; it isn't available with shared capacity. It's also not available with Premium Per User (PPU) because data stored in workspaces assigned to PPU is always stored in the home region (just like shared capacity).
Checklist - When considering other specific data regions for your tenant, key decisions
and actions include:
" Identify data residency requirements: Determine what your requirements are for
data residency. Identify which regions are appropriate, and which users might be
involved.
" Investigate use of the Multi-Geo feature: For specific situations where data should
be stored elsewhere from the home region, investigate enabling Multi-Geo.
Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.
Tip
To learn how to manage a Fabric tenant, we recommend that you work through the
Administer Microsoft Fabric module.
Power BI implementation planning: User
tools and devices
Article • 08/31/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This article introduces key considerations for planning user tools and managing devices
to enable and support Power BI consumers and authors in the organization. This article
is targeted at:
Center of Excellence (COE) and BI teams: The teams that are responsible for
overseeing Power BI in the organization. These teams include decision makers who
need to decide which tools to use for creating Power BI content.
Fabric administrators: The administrators who are responsible for overseeing
Fabric in the organization.
IT and infrastructure teams: Technical staff who install, update, and manage user devices and machines.
Content creators and content owners: Users who need to communicate with
colleagues and make requests for what they need to have installed.
One important aspect of analytics adoption is ensuring that content consumers and
content creators have the software applications they need. The effective management of
tools, particularly for users who create content, leads to increased user adoption and
reduces user support costs.
Consider how the organization will handle each of the following types of user requests.
Software requests
User license requests
Training requests
Data access requests
Process for users to request software installation. There are several ways to
handle software installation requests:
Common tools can be included in a standard machine setup. IT teams
sometimes refer to it as the standard build.
Certain applications might be installed automatically based on job role. The
software that's installed could be based on an attribute in the user profile in
Microsoft Entra ID (Azure Active Directory).
For custom requests, using a standard request form works well. A form (rather
than email) builds up a history of requests. When prerequisites or more licenses
are required, approval can be included in the workflow.
Process for installing software updates. The timely installation of software
updates is important. The goal is to stay as current as possible. Be aware that users
can read online what's possible and might become confused or frustrated when
newer features aren't available to them. For more information, see Client tools later
in this article.
Checklist - When planning for how to handle requests for new tools, key decisions and
actions include:
" Decide how to handle software requests: Clarify who's responsible for receiving
and fulfilling new requests for software installation.
" Confirm whether prerequisites are required: Determine what organizational
prerequisites exist related to training, funding, licensing, and approvals prior to
requesting software to be installed.
" Create a tracking system: Create a system to track the status and history of
software requests.
" Create guidance for users: Provide documentation in the centralized portal for how
to request new tools and software applications. Consider co-locating this guidance
with how to request licenses, training, and access to data.
Consumer tools
The most common ways that a consumer can access Power BI content include:
Power BI service: Content consumers view content by using a web browser (such as Microsoft Edge).
Teams: Content consumers view content that's been published to the Power BI service by using the Power BI app for Microsoft Teams. This option is convenient when users spend a lot of time in Teams. For more information, see Guide to enabling your organization to use Power BI in Microsoft Teams.
Power BI Mobile Application: Content consumers interact with content that's been published to the Power BI service (or Power BI Report Server) by using iOS, Android, or Windows 10 applications.
OneDrive/SharePoint viewer: Content consumers view Power BI Desktop (.pbix) files that are stored in OneDrive or SharePoint by using a web browser. This option is a useful alternative to sharing the original Power BI Desktop files. The OneDrive/SharePoint viewer is most suitable for informal teams who want to provide a friendly, web-based, report consumer experience without explicitly publishing .pbix files to the Power BI service.
Power Apps solutions: Content consumers view content from the Power BI service that's embedded in a Power Apps solution.
Custom application: Content consumers view content from the Power BI service that's been embedded in a custom application for your organization or for your customers.
Note
This list isn't intended to be an all-inclusive list of ways to access Power BI content.
Because the user experience can vary slightly between different web browsers, we
recommend that you document browser recommendations in your centralized portal.
For more information, see Supported browsers for Power BI.
Checklist - When planning for consumer tools, key decisions and actions include:
" Use a modern web browser: Ensure that all users have access to a modern web
browser that's supported for Power BI. Confirm that the preferred browser is
updated regularly on all user devices.
" Decide how Teams should be used with Power BI: Determine how users currently
work, and to what extent Teams integration is useful. Set the Enable Teams
integration and the Install Power BI app automatically tenant settings in the Fabric
admin portal according to your decision.
" Enable and install the Teams app: If Teams is a commonly used tool, enable the
Power BI app for Microsoft Teams. Consider pre-installing the app for all users as a
convenience.
" Decide whether viewing Power BI Desktop files is permitted: Consider whether
viewing Power BI Desktop files stored in OneDrive or SharePoint is allowed or
encouraged. Set the Users can view Power BI files saved in OneDrive and SharePoint
tenant setting according to your decision.
" Educate users: Provide guidance and training for content creators on how to make
the best use of each option, and where to securely store files. Include
recommendations, such as preferred web browsers, in your centralized portal.
" Conduct knowledge transfer with the support team: Confirm that the support
team is prepared to answer frequently asked questions from users.
Available tools for authoring
There are several tools that content creators can use to author Power BI content. Some
tools are targeted at self-service content creators. Other tools are targeted at advanced
content creators.
Tip
This section introduces the most common authoring tools. However, an author
doesn't need all of them. When in doubt, start by only installing Power BI Desktop.
Power BI service: Content consumers and creators who develop content by using a web browser.
Power BI Desktop: Content creators who develop data models and interactive reports that will be published to the Power BI service.
Power BI Desktop optimized for Report Server: Content creators who develop data models and interactive reports that will be published to Power BI Report Server (a simplified on-premises report portal).
Power BI Report Builder: Report creators who develop paginated reports that will be published to the Power BI service or to Power BI Report Server.
Power BI App for Teams: Content creators and consumers who interact with content in the Power BI service, when their preference is to remain within the Microsoft Teams application.
Power BI Mobile Application: Content creators and consumers who interact with and manage content that's been published to the Power BI service (or Power BI Report Server) using iOS, Android, or Windows 10 applications.
Excel: Content creators who develop Excel-based reports in workbooks that may include PivotTables, charts, slicers, and more. Optionally, Excel workbooks can be viewed in the Power BI service when they're stored in SharePoint or OneDrive for work or school.
Third-party tools: Advanced content creators may optionally use third-party tools and extend the built-in capabilities for purposes such as advanced data model management and enterprise content publishing.
Tip
We recommend that you adopt one method of working and then consistently use
that method. For example, when content creators are inconsistent about using
Power BI Desktop versus the Power BI service for report creation, it becomes much
harder to determine where the original report resides and who's responsible for it.
Web-based authoring
The capabilities in the Power BI service for authoring and editing content are continually
evolving (alongside capabilities for viewing, sharing, and distributing content). For
content creators that use a non-Windows operating system (such as macOS, Linux, or
Unix), web-based authoring in the Power BI service is a viable option. Web-based
authoring is also useful for organizations that aren't capable of keeping Power BI
Desktop updated on a timely basis.
Note
Because the Power BI service is a web application, Microsoft installs all updates to
ensure it's the latest version. That can be a significant advantage for busy IT teams.
However, it's also important to closely monitor when releases occur so that you're
informed about feature changes.
There are some types of Power BI items that can be created in the web-based
experience, such as:
Dataflows
Datamarts
Paginated reports
Power BI reports
Dashboards
Scorecards
A Fabric solution can be created end-to-end in a browser. The solution may include
Power BI items, and also non-Power BI items (such as a lakehouse).
Important
When choosing to create content in the browser, it's important that you educate
content creators where to save content. For example, it's easy to save a new report
to a personal workspace, but that's not always an ideal choice. Also, it's important
to consider how versioning will be handled (such as Git integration).
Power BI Desktop
Because it's a free application, Power BI Desktop is a great way for content creators to
get started with developing data models and creating interactive reports. Power BI
Desktop allows you to connect to many data sources, combine data from multiple data
sources, clean and transform data, create a data model, add DAX calculations, and build
reports within a single application. Power BI Desktop is well-suited to building
interactive reports with a focus on exploration.
Note
There are many options and settings in Power BI Desktop that significantly affect
the user experience. Not all settings can be programmatically maintained with
group policy or registry settings (described later in this article). One key setting
relates to preview features that users can enable in Power BI Desktop. However,
preview features are subject to change, have limited support, and may not always
work in the same way in the Power BI service (during the preview period).
We recommend that you only use preview features to evaluate and learn new
functionality. Preview features shouldn't be used for mission-critical production
content.
Power BI Desktop for Report Server
Like the standard version of Power BI Desktop, content creators can use Power BI
Desktop for Report Server to create .pbix files. It supports publishing content to Power
BI Report Server. New versions align with the release cadence of Power BI Report Server,
which is usually three times per year.
It's important that content creators use the correct report server version of Power BI
Desktop to avoid compatibility issues after content has been published to Power BI
Report Server. You can manually download and install Power BI Desktop for Report
Server from the Microsoft Download Center.
For users who publish content to both the Power BI service and Power BI Report Server,
there are two options.
Option 1: Only use Power BI Desktop for Report Server because it produces files
that can be published to both the Power BI service and the report server. New
authoring features will become available to users approximately every four months
(to remain consistent with the Power BI Report Server release cadence).
Pros:
Content creators only need to use one tool.
Content creators are assured that the content they publish is compatible with
the report server.
Fewer tools are simpler to manage.
Cons:
Some features that are only supported in the Power BI service aren't available
in the Report Server version of Power BI Desktop. Therefore, content creators
may find it limiting.
New features are slower to become available.
Preview features aren't available.
Option 2: Run both versions—Power BI Desktop, and Power BI Desktop for Report
Server—side by side.
Pros:
All features in standard Power BI Desktop are available to be used.
New features for the standard Power BI Desktop are available more quickly.
Preview features for the standard Power BI Desktop are available to use, at
the content creator's discretion.
Cons:
Content creators must be prepared for complexity because they need to
remember which version to use when, based on the target deployment
location. The risk is that when a .pbix file from the newer version is
inadvertently published to Power BI Report Server, it may not function
correctly. For example, data model queries fail, data refresh fails, or reports
don't render properly.
Content creators need to be aware of the default behavior when they directly
open .pbix files (instead of opening them from within Power BI Desktop).
Microsoft Excel
Many business users are proficient with Microsoft Excel and want to use it for data
analysis by using PivotTables, charts, and slicers. There are other useful Excel features as
well (such as cube functions) that allow greater flexibility and formatting when
designing a grid layout of values. Some content creators might also prefer to use Excel
formulas for some types of calculations (instead of DAX calculations in the data model),
particularly when they perform data exploration activities.
The most efficient way to use Excel with Power BI is to connect Excel to a Power BI dataset with a live connection (for example, by using Analyze in Excel).
There are other ways to work with Excel. These options are less optimal, and so you
should use them only when necessary.
Export to Excel: Many users have established a habit of exporting data to Excel
from reports or dashboards. While Power BI supports this capability, it should be
used cautiously and in moderation because it results in a static set of data. To ensure that data exports to Excel aren't overused, users in the organization should be educated on the downsides of exports, and administrators should track exports in the user activity data (see the sketch after this list).
Get source data from Excel: Excel can be used as a data source when importing
data to Power BI. This capability works best for small projects when a user-friendly
Excel-based solution is required to maintain source data. It can also be useful to
quickly conduct a proof of concept (POC). However, to reduce the risk associated
with Excel data sources, the source Excel file should be stored in a secure, shared
location. Also, column names shouldn't be changed to ensure data refreshes
succeed.
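To illustrate how administrators might track exports in the user activity data (as referenced in the export item above), the following minimal Python sketch reads one day of events from the activity events admin API and counts export-related activities per user. The token acquisition is assumed to follow the earlier service principal sketch, and filtering on activity names that contain 'Export' is an assumption; verify the exact activity names that appear in your tenant's data.

import requests
from collections import Counter

# Assumption: a token is acquired as in the earlier service principal sketch.
access_token = "<token acquired with a service principal>"
headers = {"Authorization": f"Bearer {access_token}"}

# The activity events admin API returns events for a window within a single UTC day.
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2023-09-01T00:00:00Z'&endDateTime='2023-09-01T23:59:59Z'"
)

export_counts = Counter()
while url:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    payload = response.json()
    for event in payload.get("activityEventEntities", []):
        # Assumption: export-related activities include 'Export' in the Activity name.
        if "export" in str(event.get("Activity", "")).lower():
            export_counts[event.get("UserId", "unknown")] += 1
    # Follow the continuation link until all pages for the window have been read.
    url = payload.get("continuationUri")

for user, count in export_counts.most_common(10):
    print(user, count)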
Tip
We recommend that you primarily encourage the use of Excel as a live connection.
Carefully consider whether Excel is an appropriate authoring tool for each situation.
Power BI Report Builder
Paginated reports are best suited to highly formatted, or pixel-perfect, reports such as
financial statements. They're also suitable for reports that are intended to be printed or
for PDF generation, and when user input (with report parameters) is required.
Tip
For other scenarios that favor choosing paginated reports, see When to use
paginated reports in Power BI.
Here are some important points to consider when deciding on using Power BI Report
Builder.
Approach working in Power BI Report Builder with a different mindset than when
you work in Power BI Desktop. A paginated report always focuses on the creation
of one individual report (conversely, a dataset created in Power BI Desktop may
serve many different reports).
Developing paginated reports involves more skill than creating Power BI reports.
However, the main benefit is fine-grained control over data retrieval, layout, and
placement.
A paginated report is concerned with both data retrieval and layout. You're
required to develop a query (known as a dataset—not to be confused with a Power
BI dataset) to retrieve data from an external data source, which might involve
writing a native query statement (in DAX, T-SQL, or other language). The dataset
belongs to one report, so it can't be published and used by other paginated
reports.
Report consumers become accustomed to the built-in interactivity of Power BI
reports. However, report interactivity isn't a strength of paginated reports.
Attempting to achieve similar interactivity in paginated reports can be challenging
or impossible.
If you need to access data by using a database stored procedure (such as an Azure
SQL database stored procedure), that's possible with paginated reports.
There are some feature differences and unsupported capabilities depending on
whether the paginated report is published to the Power BI service or Power BI
Report Server. We recommend that you conduct a proof of concept to determine
what's possible for your target environment.
Tip
For more information, see Paginated reports in Power BI FAQ and Design tips for
reports in Power BI Report Builder.
Third-party tools
Advanced content creators can choose to use third-party tools, especially for enterprise-
scale operations. They can use third-party tools to develop, publish, manage, and
optimize data models. The goal of these tools is to broaden the development and
management capabilities available to dataset creators. Common examples of third-party
tools include Tabular Editor, DAX Studio, and ALM Toolkit. For more information, see the
advanced data model management usage scenario.
Note
The use of third-party tools has become prevalent in the global Power BI
community, especially by advanced content creators, developers, and IT
professionals.
There are three main ways to use third-party tools for dataset development and
management.
Use an external tool to connect to a local data model in Power BI Desktop: Some
third-party tools can connect to the data model in an open Power BI Desktop file.
When registered with Power BI Desktop, these tools are known as external tools
and extend the native capabilities of Power BI Desktop.
Use the XMLA endpoint to connect to a remote data model in the Power BI
service: Some third-party tools can use the XML for Analysis (XMLA) protocol to
connect to a dataset that's been published to the Power BI service. Tools that are
compliant with the XMLA protocol use Microsoft client libraries to read and/or
write data to a data model by using Tabular Object Model (TOM) operations (see the note after this list).
Use a template file to connect to a local data model in Power BI Desktop: Some
third-party tools distribute their functionality in a lightweight way by using a Power
BI Desktop template (.pbit) file.
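As a point of reference for the XMLA option above, tools that support the XMLA endpoint (such as Tabular Editor, DAX Studio, or SQL Server Management Studio) typically accept a workspace connection string in the following form, where the workspace name is a placeholder; the workspace must be assigned to a capacity that supports the XMLA endpoint, and the endpoint must be enabled by an administrator:
powerbi://api.powerbi.com/v1.0/myorg/<workspace name>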
Some third-party tools are proprietary and require a paid license (such as Tabular Editor
3). Other community tools are free and open source (such as Tabular Editor 2, DAX
Studio, and ALM Toolkit). We recommend that you carefully evaluate the features, cost, and support model of each tool so you can sufficiently support your content creators.
Tip
Some organizations find it easier to get a new tool approved that's fully supported
(even when there's a licensing cost). However, other organizations find it easier to
get a free open-source tool approved. Your IT department can provide guidance
and help you do the necessary due diligence.
Checklist - When planning for authoring tools, key decisions and actions include:
" Decide which authoring tools to encourage: For self-service creators and advanced
content creators, consider which of the available tools will be actively promoted for
use in the organization.
" Decide which authoring tools will be supported: For self-service creators and
advanced content creators, consider which of the available tools will be supported
and by whom.
" Evaluate the use of third-party tools: Consider which third-party tools will be
allowed or encouraged for advanced content creators. Investigate the privacy
policy, licensing cost, and the support model.
" Create guidance for content creators: Provide guidance and training to help users
choose and use the appropriate authoring tool for their circumstances.
Client tools
IT often uses the term client tools to refer to software that's installed on client machines
(user devices). The most common Power BI software installed on a user device is Power
BI Desktop.
Because Microsoft usually updates Power BI Desktop every month, it's important to have
a seamless process for managing installations and updates.
Here are several ways that organizations can manage installations and updates of Power
BI Desktop.
Microsoft Store (supports automatic updates): Power BI Desktop is distributed from the Microsoft Store. All updates, including bug fixes, are automatically installed. This option is an easy and seamless approach, provided that your organization doesn't block some (or all) apps from the Microsoft Store for some (or all) users.
Manual installation (no automatic updates): You can manually download and install an executable (.exe) file from the Microsoft Download Center. However, be aware that the user who installs the software must have local administrator rights; in most organizations, those rights are restricted. If you choose to use this approach (and it isn't managed by IT), there's a risk that users will end up with different versions of Power BI Desktop installed, possibly resulting in compatibility issues. Also, with this approach, every user will need to be notified to install quick fix engineering (QFE) releases, also known as bug fixes, when they come out.
It's important that user devices have adequate system resources. To be productive,
content creators who work with large data volumes may need system resources that
exceed the minimum requirements—especially memory (RAM) and CPU. IT may have
suggested machine specifications based on their experience with other content creators.
All content creators collaborating on Power BI development should use the same
version of the software—especially Power BI Desktop, which is usually updated every
month. We recommend that you make updates automatically available to users because:
Multiple content creators who collaborate on a Power BI Desktop file are assured
of being on the same version. It's essential that creators who work together on the
same .pbix file use the same software version.
Users won't have to take any specific action to obtain updates.
Users can take advantage of new capabilities, and their experience is aligned to
announcements and documentation. It can impact adoption and user satisfaction
when content creators learn about new capabilities and features, yet they
experience long delays between software updates.
Only the latest version of Power BI Desktop is supported by Microsoft. If a user has
an issue and files a support ticket, they'll be asked by Microsoft support to
upgrade their software to the latest version.
In addition to Power BI Desktop (described previously), you may need to install and
manage other Microsoft tools or third-party tools on user devices, including mobile
devices. For a list of possible tools, see Available tools for authoring earlier in this article.
Users who create and manage files located in Fabric OneLake might also benefit from
OneLake File Explorer. This tool allows them to conveniently upload, download, edit, or
delete files in OneLake by using Windows file explorer.
Note
Your IT department may have managed device policies in place. These policies
could control what software can be installed, and how it's managed.
Important
For data sources that require connectivity through a gateway, the same drivers,
connectors, and providers will need to be installed on each data gateway machine.
Missing components on a data gateway are a common reason for data refresh
failures once content has been published to the Power BI service.
Tip
To simplify delivery to a larger number of users, many IT teams deploy the most
common drivers, connectors, and providers as part of a standard user device setup.
Teams, OneDrive for Business, SharePoint: Self-service content creators often save
files in Teams, OneDrive for work or school, or SharePoint. Users find these tools
are familiar and simple to use. Shared libraries can be organized, secured for
appropriate coworkers, and versioning is built in.
Source control plug-ins: Advanced content creators may need to integrate with a source control tool. Typically, that involves installing Git, and then using a client tool with Git support (such as Visual Studio Code) to commit content changes to a remote repository, such as Azure DevOps Repos. For Power BI Desktop, creators can use developer mode. In this mode, content is saved as a Power BI project (.pbip) file, which can be tracked in a source control system. When working with Fabric, workspace Git integration also complements this client tool workflow.
Custom visuals
Power BI custom visuals, which developers can create by using the Power BI visuals
SDK, allow Power BI report creators to work beyond the built-in core visuals. A custom
visual can be created and released by Microsoft, software developers, vendors, or
partners.
To use a custom visual in Power BI Desktop, it must first be installed on the machine of
the content creator. There are several ways to distribute visuals to users.
Important
If your organization is highly concerned about data privacy and data leakage,
consider governing all custom visuals through the organizational visuals repository.
Tip
Also, before you approve the use of a new custom visual, it's critical that you
evaluate any security and data privacy risks because:
Visuals execute JavaScript code and have access to the data that they
visualize.
Visuals can transmit data to an external service. For example, a visual may
need to transmit data to an API to run an AI algorithm or to render a map.
Just because a visual transmits data to an external service, it doesn't mean it's
untrustworthy. A visual that transmits data can't be certified.
You can specify whether uncertified visuals are allowed or blocked in Power BI Desktop.
To ensure users have a consistent experience in both Power BI Desktop and the Power BI
service, it's important that custom visuals be managed consistently in two places.
Tenant setting: The Add and use certified visuals only (block uncertified) tenant
setting allows or blocks using custom visuals when users create or edit reports in
the Power BI service.
Group policy: The group policy setting controls the use of custom visuals when
users create or edit reports in Power BI Desktop. If a content creator spent
considerable time creating content in Power BI Desktop that can't be displayed in
the Power BI service (due to a misaligned tenant setting), it would result in a
significant amount of user frustration. That's why it's important to keep them both
aligned.
You can also use group policy to specify whether data exports are allowed or blocked
from custom visuals.
Registry settings
The Windows operating system stores machine information, settings, and options in the
Windows registry. For Power BI Desktop, registry settings can be set to customize user
machines. Registry settings can be updated by group policy, which helps IT set up
default settings that are consistent for all users (or groups of users).
Here are several common uses of registry settings related to Power BI Desktop.
Disable notifications that a software update is available (see the example after this list). That's useful when you're certain that IT will obtain the Power BI Desktop update, perform validations, and then push updates to user devices through their normal process.
Set the global privacy level. It's wise to set this setting to Organizational as the
default because it can help to avoid data privacy violations when different data
sources are merged.
Disable the Power BI Desktop sign-in form. Disabling the form is useful when
organizational machines are automatically signed in. In this case, the user doesn't
ever need to be prompted.
Tune Query Editor performance. This setting is useful when you need to influence
query execution behavior by changing defaults.
Disable the external tools ribbon tab. You might disable the ribbon tab when you
know you can't approve or support the use of external tools.
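As an example of the first use listed above, update notifications can be turned off with a machine-level registry value, which can also be deployed through group policy. The key path and value name shown here are an assumption to verify against current Power BI Desktop documentation before rolling it out:
Key: HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Microsoft Power BI Desktop
Value (DWORD): DisableUpdateNotification = 1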
Tip
Usually, the goal isn't to significantly limit what users can do with tools. Rather, it's
about improving the user experience and reducing support needs.
Mobile device management
Many users like to interact with Power BI content on a mobile device, such as a tablet or
a phone, whether they're at home or traveling. The Power BI mobile apps for iOS,
Android, and Windows are primarily designed for smaller form factors and touch
screens. They make it easier to interact with content that's been published to the Power
BI service or Power BI Report Server.
You can specify app protection policies and device protection policies for managed and
unmanaged devices by using Microsoft Intune. Intune is a software service that provides
mobile device and application management, and it supports mobile application
management (MAM) policies. Policies can be set at various levels of protection.
Optionally, a mobile device management (MDM) solution from Microsoft 365, or a third
party, may also be used to customize the behavior of Power BI mobile apps. The Power
BI app for Windows also supports Windows Information Protection (WIP).
Here are several ways that you might choose to use MAM and MDM policies.
For more information about securing devices and data, see the Power BI security
whitepaper.
" Determine how Power BI Desktop will be updated: Consider how to install Power
BI Desktop (and other client tools). Whenever possible, ensure that updates are
automatically installed.
" Identify the necessary client tool prerequisites: Ensure that all prerequisite
software and packages are installed and updated regularly.
" Identify the necessary data connectivity components: Ensure that all drivers,
connectors, and providers that are required for data connectivity are installed and
updated regularly.
" Determine how to handle custom visuals: Decide how custom visuals will be
handled from AppSource and other sources. Set the Allow visuals created from the
Power BI SDK tenant setting and the Add and use certified visuals only tenant setting
to align with your decisions. Consider creating a process that allows users to
request a new custom visual.
" Set up group policy settings: Set up group policy to ensure that custom visuals are
managed the same way in Power BI Desktop as they are in the Power BI service.
" Set up registry settings: Set up the registry settings to customize user machines,
when applicable.
" Investigate mobile device management: Consider using app protection policies
and device protection policies for mobile devices, when appropriate.
Next steps
For more considerations, actions, decision-making criteria, and recommendations to
help you with Power BI implementation decisions, see Power BI implementation
planning.
Power BI implementation planning:
Workspaces
Article • 08/23/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This workspaces article introduces the Fabric workspace planning articles, which have an emphasis on the Power BI experience. These articles are targeted at multiple audiences.
Fundamentally, a workspace is a container in the Fabric portal for storing and securing
content. Primarily, workspaces are designed for content creation and collaboration.
Note
The concept of a workspace originated in Power BI. With Fabric, the purpose of a
workspace has become broader. The result is that a workspace can now contain
items from one or more different Fabric experiences (also known as workloads).
Even though the content scope has become broader than Power BI, most of the
workspace planning activities described in these articles can be applied to Fabric
workspace planning.
Workspace planning happens at two levels:
Tenant-level workspace planning: Strategic decisions and actions that affect all workspaces in the tenant.
Workspace-level planning: Tactical decisions and actions to take for each workspace.
Next steps
In the next article in this series, learn about tenant-level workspace planning.
Power BI implementation planning:
Tenant-level workspace planning
Article • 08/23/2023
Note
This article forms part of the Power BI implementation planning series of articles.
This series focuses primarily on the Power BI workload within Microsoft Fabric. For
an introduction to the series, see Power BI implementation planning.
This article covers tenant-level Fabric workspace planning, with an emphasis on the Power BI experience. It's primarily targeted at administrators and the teams who are responsible for overseeing Fabric in the organization. Secondarily, this article may also be of interest to self-service creators who need to create, publish, and manage content in workspaces.
Because workspaces can be used in different ways, most tactical decisions will be made
at the workspace level (described in the next article). However, there are some strategic
planning decisions to make at the tenant level, too.
We recommend that you make the tenant-level workspace decisions as early as possible
because they'll affect everything else. Also, it's easier to make individual workspace
decisions when you have clarity on your overall workspace goals and objectives.
Note
The concept of a workspace originated in Power BI. With Fabric, the purpose of a
workspace has become broader. The result is that a workspace can now contain
items from one or more different Fabric experiences (also known as workloads).
Even though the content scope has become broader than Power BI, most of the
workspace planning activities described in these articles can be applied to Fabric
workspace planning.
Workspace creation permissions
The decision on who is allowed to create workspaces in the Power BI service is a data
culture and governance decision. Generally, there are two ways to approach this
decision:
All (or most) users are permitted to create new workspaces: This approach
usually aligns with existing decisions for other applications. For example, when
users are permitted to create their own SharePoint sites or Teams channels, it
makes sense that Fabric adopts the same policy.
Limited to a selective set of users who are permitted to create new workspaces:
This approach usually indicates a governance plan is in place or is planned.
Managing this process can be fully centralized (for instance, only IT is permitted to
create a workspace). A more flexible and practical approach is when it's a
combination of centralized and decentralized individuals. In this case, certain
satellite members of the Center of Excellence (COE), champions, or trusted users
have been trained to create and manage workspaces on behalf of their business
unit.
You should set up the Create workspaces tenant setting in the Fabric admin portal
according to your decision on who is allowed to create workspaces.
Checklist - When considering permissions for who can create workspaces, key decisions and actions include:
" Decide who's permitted to create workspaces: Determine whether workspace creation is open to all (or most) users, or restricted to a selective set of trained users.
" Set the Create workspaces tenant setting: In the Fabric admin portal, update the tenant setting according to your decision.
Workspace naming conventions
It can be difficult to strictly enforce naming conventions when many users possess the
permission to create workspaces. You can mitigate this concern with user education and
training. You can also conduct an auditing process to find workspaces that don't
conform to the naming conventions.
The workspace name can convey additional information about the workspace, including:
Purpose: A workspace name should always include a description of its content. For
example, Sales Quarterly Bonus Tracking.
Item types: A workspace name can include a reference to the types of items it
contains. For example, use Sales Data to indicate the workspace stores items like a
lakehouse or datasets. Sales Analytics could indicate that the workspace stores
analytical reports and dashboards.
Stage (environment): A workspace name might include its stage. For example, it's
common to have separate workspaces (development, test, and production) for
lifecycle management.
Ownership and responsibility: A workspace name might include an indication of
who's responsible for managing the content. For example, use of an SLS prefix or
suffix can indicate that the sales team owns and manages the content.
Tip
To keep workspace names short, you can include additional detail in the workspace
description. However, make sure that the most relevant information is included in
the workspace name, particularly if you anticipate users will search for workspaces.
You can also use a workspace image to augment the workspace name. These
considerations are described further in the workspace settings section in the next
article.
Having consistent workspace names helps everyone. The user experience is improved
because users can find content more easily. Also, administrators can oversee the content
more easily when predictable naming conventions are used.
We recommend that you include the workspace naming conventions in your centralized
portal and training materials.
Use short yet descriptive names: The workspace name should accurately reflect its
contents, with the most important part at the beginning of the name. In the Fabric
portal, long workspace names may become truncated in user interfaces, requiring
the user to hover the cursor over the workspace name to reveal the full name in a
tooltip. Here's an example of a short yet descriptive name: Quarterly Financials.
Use a standard prefix: A standard prefix can arrange similar workspaces together
when sorted. For example: FIN-Quarterly Financials.
Use a standard suffix: You can add a suffix for additional information, such as
when you use different workspaces for development, test, and production. We
recommend appending [Dev] or [Test] suffixes but leaving production as a user-
friendly name without a suffix. For example: FIN-Quarterly Financials [Dev].
Be consistent with the Power BI app name: The workspace name and its Power BI
app can be different, particularly if it improves usability or understandability for
app consumers. We recommend keeping the names similar to avoid confusion.
Omit unnecessary words: The following words may be redundant, so avoid them
in your workspace names:
The word workspace.
The words Fabric or Power BI. Many Fabric workspaces contain items from
various workloads. However, you might create a workspace that's intended to
target only a specific workload (such as Power BI, Data Factory, or Synapse Data
Engineering). In that case, you might choose a short suffix so that the
workspace purpose is made clear.
The name of the organization. However, when the primary audience is external
users, i