Ramblings about data communities and your contributions, no excuses

I have been active in the data community throughout my career. I have met people and made friends in the process. As I look back on it, I am thankful I was involved and participated. I firmly believe you should as well.

Contents

  • The value of data communities
  • Why you should be a contributor
  • Write about it
  • Talk about it
  • Are you ready to ramble?

The value of data communities

I want to kick off this section with my experience with community, then delve into the value of being involved in the data community.

A little history

When I started my consulting career in SQL Server nearly 25 years ago, I was a newbie. I came from a background in Microsoft Access. At some point, I got connected with some other SQL Server professionals in Minnesota. We decided to create a user group – Minnesota SQL Server User Group. Eventually, we joined PASS.

This was my first experience in the data community. I made many friends in this community. We supported each other’s skills and career growth. We helped each other with technical issues and shared technical wins.

The value of community is community

You should definitely participate in communities. The best option is to take the time, usually once a month, to engage in community. Meeting in person is preferred because you can focus on the people and the topic. Talk to each other. If you join a virtual group, participate! Don’t do something else during the meeting. Engage in the comments, Q&A, and any banter. While virtual group meetings are more convenient, the onus is on you to interact. If there is no option to interact with each other, it is a webinar, not a user group. Find a user group.

The value of the data community is the community. Yes, we can and will learn from each other. Knowing and networking with peers leads to more growth and maturity as a person and a professional.

Why you should be a contributor

Simply put, if everyone is a consumer, the community dies. The community is not intended to be a school with a couple of teachers and a bunch of students. In a true community, we are all contributors. Every user group has consumers and contributors. To be clear, not everyone who contributes leads the group or gives talks. Some ask questions, others stick around for discussions, and some extend a hand to welcome others. Consumers come, listen, and leave. Introversion is not a good excuse. Some of the best contributors I know are introverts.

Storytelling

We all have a story to tell, from “it’s all new to me” to “I have been doing this for 20 years.” Everyone can give back to the group through questions, advice, inclusion, and insights. The key is proactively engaging and including. In this way, we build friendships, grow careers, and expand horizons. You miss out on all this if you only consume.

I am going to expand on two specific types of contributions in the remainder of this post – writing and talking. These are two ways to tangibly contribute to the community.

Write about it

When you write it down, you will remember it more. Writing for others forces clarity, accuracy, and defensibility. I call this “writing with CAD.” When we write to share with the community, we are compelled to write this way.

Clarity

Writing for others forces us to be clear about the topic. We need to organize our thoughts and write with purpose. We have to answer questions about what we are writing and whether it makes sense. I think this includes good editing. I highly recommend using tools that check grammar and spelling, like Microsoft Editor in Office and Edge. Be careful using tools like Grammarly and Copilot. Use AI to clarify thoughts, not to generate them.

Accuracy

Accuracy is especially important in technical writing. We should never assume our readers can fill in the blanks. If you’re writing a step-by-step blog, make sure you have all the steps. Be sure to include any context or assumptions. While we cannot guarantee that we didn’t miss anything, we should do our best to be precise so our readers can reproduce, practice, or implement what we are writing about.

Defensibility

Are you prepared to defend what you are writing about? I don’t mean this negatively. You should be able to explain why you did it that way or why you think you are correct. Sometimes this is part of the comment or post. Other times you just need to be prepared to answer questions. Defensibility is about being prepared.

To be clear, defensibility does not mean you are always right. Be prepared to hear new ideas and accept corrections. You can’t and won’t know everything. But it is important to know your “why.”

My motivation to write

When I started to blog, I made the mistake of writing for others. I made a decision early on to change the goal of my content. No longer would I try to write about what I thought people would read. I decided to write for me. My technical writing became my personal knowledge base. If no one reads it, oh well. It was the best decision I made, and I recommend it to new bloggers all the time.

One great example from my writing is my series on Excel. I was embedding Excel workbooks into SharePoint. The workbooks were backed by SQL Server Analysis Services cubes. The goal was to make elegant dashboards without looking like Excel. This tips-and-tricks series includes many of my most-read posts, and some are still being viewed today. I wrote them as a reference for me, and others still find them helpful.

Next, I want to look at some ways you can contribute to the community through writing. It’s not just about blogging.

Where to write

If you’re interested in starting to write, here are some good options. If you’re already writing, maybe try something new.

Blogging

This is where I started in 2010. However, it’s not necessarily the easiest option. There are many decisions to be made to launch a blog.

  • Where to host it? I use and like WordPress. We are still using the free version. I have seen blogs on Medium lately. Check out this article on the best options for free blogging platforms.
  • What to call it? You can be creative here.
  • Deciding on your first article.
  • Where to promote it? LinkedIn, Facebook, X?

One reason I like blogging is that I own and control my content. I can point people to my blog, and it is clearly my work.

LinkedIn articles

LinkedIn articles can be a nice way to start writing. You can also promote a newsletter for people to subscribe to. It comes with a built-in promotional platform. You are on LinkedIn after all.

Data on Rails

Not sure where to start? We have a shared blog site where you can write a couple of posts to see if you like blog writing. We will promote your work as well.

If you like it and want to start your own blog, great! We encourage you to take your content to help kick it off. You are always welcome to keep writing here if you prefer to.

Commenting

The last area I want to cover is commenting. This is a great way to piggyback onto topics, content, or questions posted by others. You can easily share your thoughts, insights, and stories with others. Some of these can serve as prompts for your own content. Here are a few options for joining the conversation.

  • LinkedIn
  • Reddit
  • Microsoft forums

Talk about it

Most technologists are afraid of public speaking. Even now some of you are getting queasy just thinking about it. But talking about what you know and what you are learning is a great way to give back to the community. Speaking on a topic requires you to succinctly describe what you are talking about.

I have used speaking opportunities to share my experience with products, patterns, and code. What I have found is that I always have gaps to fill in about the topic, so I end up learning more than I knew before I started.

I know getting started can be hard. Here is a pattern that may be helpful.

  1. Choose
  2. Prepare
  3. Practice
  4. Present
  5. Improve

Let’s break these down.

Choose

When you start out, choose something you are familiar with and think is cool. You should be excited and comfortable with your topic. Also, try to be concise. A narrow topic is easier to prepare for.

Prepare

This is where most people get stuck. They start out with the wrong questions: What do I need? A presentation? A demo? What you really need is an outline. An outline will help you stay focused. Start out by identifying three points to make about your topic, no more, no less. I would recommend writing them down.

Once you have them ready, fill in the blanks. What do you want to say about each point? Do you have sample code? A picture of a whiteboard? Lessons learned? Compile these into a document. Now you can expand your outline. It could look something like this:

  • Topic: working with window functions in SQL Server
  • What are window functions
    • What they do
    • Why I needed them
  • How to build one
    • Sample code
    • Define key functions
      • Partition
      • Over
      • Order by
  • Aggregation
    • Sample code of my use case
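The “sample code” bullets in an outline like this are worth sketching ahead of time. Below is a minimal, runnable sketch of a running-total window function; it uses SQLite (3.25+) via Python so it can be practiced anywhere, and the Orders table is hypothetical. The same OVER (PARTITION BY ... ORDER BY ...) clause works in SQL Server.

```python
import sqlite3

# Hypothetical Orders table; the window function is the point of the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (CustomerID INT, OrderDate TEXT, OrderTotal REAL)")
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?, ?)",
    [(1, "2024-01-01", 100.0), (1, "2024-02-01", 50.0), (2, "2024-01-15", 75.0)],
)

# Running total per customer: PARTITION BY restarts the sum for each
# customer, and ORDER BY accumulates it in date order.
rows = conn.execute(
    """
    SELECT CustomerID, OrderDate,
           SUM(OrderTotal) OVER (
               PARTITION BY CustomerID
               ORDER BY OrderDate
           ) AS RunningTotal
    FROM Orders
    ORDER BY CustomerID, OrderDate
    """
).fetchall()
print(rows)  # customer 1 accumulates 100.0 then 150.0; customer 2 has 75.0
```

Walking through a small, self-contained example like this in a talk lets the audience see the partition and ordering behavior without needing a full database setup.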

The presentation

Now you can build a presentation. You should start a slide deck with the following slides.

  1. Title. Includes topic and your name.
  2. Who you are. Name, role, something interesting about you.
  3. Introduction. Topic, why the topic interests you.
  4. First point.
  5. Second point.
  6. Third point.
  7. Lessons learned. How did it help you? Or a quick summary about how to use it.
  8. Thank you. Q & A, references.

You may need extra slides for some of your points, especially if you have sample code or diagrams. Don’t add too many extra slides. Remember that the slides support what you are talking about. If you want people to read something, use a printed document. DO NOT JUST READ YOUR SLIDES!

Practice

People practice in different ways. You should try a couple to find what works best for you. Whatever works for you is the right way for you to prepare. Here are some examples that you could use:

  • Practice in front of a mirror
  • Use PowerPoint’s timing feature
  • Run through it with a friend
  • Do a dry run with an experienced speaker
  • Rehearse in your head

Present

You get to do your presentation. Exciting!

Improve

After your presentation, be critical of your performance, in a good way. We can always improve. If the event has reviews, use them to make improvements. Don’t try to change everything; focus on one or two things.

Demos are risky, and everyone has demos fail. If something goes wrong, have a backup ready; you don’t want to troubleshoot your demo live. I had slides with screenshots ready to go. I don’t recommend demos for your first presentation.

Where to speak

Next, we will look at some good opportunities for speaking.

User groups

User groups are a great opportunity to speak to a friendly audience.

Lunch and learn

Lunch and learns are an informal way to get comfortable talking about your topic. Usually, you do these with your peers at work or with a client.

Small conferences

SQL Saturdays, Days of Data, and Data Saturdays are examples of small conferences. These are the next step after user groups.

Calls for presenters

When you are ready to stretch your wings, calls for presenters are where you will find many opportunities to present.

Career growth

While community involvement benefits the community, it also benefits your career. Not only can you build up your resume, but you can also build up your professional network.

Are you ready to ramble?

Well, if you made it this far, I hope I have inspired you to get involved. Many people have started out small and grown their professional careers using these activities. Everyone can contribute, even you. Let us know what you do in the comments. We would love to hear from you.


SQL, MDX, DAX – the languages of data

Ramblings of a retired data architect

Let me start by saying that I have been working with data for over thirty years. I think that just means I am old. Anyway, I have written blog posts, delivered presentations, and authored books on these languages through the years. Understanding and using these languages have grown and shaped my career through the years. I thought it would be fun to discuss my thoughts on each one. This is my take and some of my thoughts are definitely “tongue in cheek.” So, enjoy the ride and feel free to share your take in the comments.

  • SQL – ubiquitous and relational
  • MDX – complex and dimensional
  • DAX – formulaic and columnar
  • Thoughts and musings

SQL, structured query language

SQL is the oldest of the languages. It was designed to support relational database management systems (RDBMS). It is built on mathematical principles, set theory and relational algebra, that improve performance and optimize storage. Normalization rules were established to guide developers on the preferred approaches to building databases.

Why is SQL ubiquitous?

SQL is everywhere. SQL is the query language of choice for enterprise data platforms such as Microsoft SQL Server and Oracle. Open-source data platforms like MySQL and PostgreSQL are also built to use SQL.

How is this possible? SQL is an ANSI standard. This means that the core of the language is managed by a governing body. If you learn SQL, you should be able to write queries in all these databases, right? Sort of. You should be able to write a query like this in all the databases: SELECT field1, field2 FROM table WHERE field3 = 50.

However, vendors often implement their own variations of SQL to meet needs in their platform design or to provide nonstandard functionality to their users. (Engineering outpaces standards development.) For example, Microsoft created T-SQL and Oracle created PL/SQL. One of my first experiences with this was returning a single row in a query. I used TOP 1 in SQL Server, but there was no TOP keyword in Oracle.

Code examples

SQL Server

SELECT TOP 1 column1, column2 
FROM table_name
WHERE condition;

Oracle

SELECT column1, column2
FROM table_name
WHERE condition
AND ROWNUM = 1;

PostgreSQL / MySQL

SELECT column1, column2
FROM table_name
WHERE condition
LIMIT 1;
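Worth noting: modern versions of these platforms (SQL Server 2012+, Oracle 12c+, and PostgreSQL, though not MySQL) also support the ANSI SQL:2008 row-limiting syntax, which makes this particular query portable. A sketch using the same placeholder names as above:

```sql
-- ANSI SQL:2008 row limiting; SQL Server requires the ORDER BY
SELECT column1, column2
FROM table_name
WHERE condition
ORDER BY column1
OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY;
```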

What does it all mean?

You can learn SQL once and efficiently query data across multiple data platforms. Whether you are a data engineer or a data analyst, you must know SQL if you are to be taken seriously as a data professional.

MDX, multidimensional expressions

I was introduced to MDX by SQL Server Analysis Services (SSAS). For me, it just clicked. More about that in a bit. MDX, like SQL, is heavily based on math. Whereas SQL is two dimensional (column and rows), MDX can theoretically use an unlimited number of dimensions. The number is limited in practice by the capability of the data platform. MDX was primarily used by two vendors, Microsoft and Hyperion.

One key difference between the two platforms is their purpose. Relational databases are optimized for transactions and small result sets. Multidimensional databases are built for analysis across huge datasets.

Why is MDX considered complex?

The toughest part for most data professionals is visualizing multidimensional datasets in their minds. Relational data is easy to visualize. It looks like a spreadsheet. Multidimensional data is not that simple. We call it a cube, but that is a simplistic representation with only three dimensions. It is a cool name though.

Earlier in my career I coached data consultants on their transition to BI consultants. As I helped a consultant with MDX, I told him at some point he would “get it.” I told him to call me when he did. Six months later he called me, told me that he got it, and hung up on me. Many consultants didn’t get it and either just forced their way through it or went back to relational.

MDX was designed to traverse dimensions, build sets, and aggregate values across those sets. I mentioned earlier that MDX made sense to me right away. The first time I was exposed to MDX, I learned about the various functions and methods to work with dimensions including child, parent, descendants, and ancestors. You could think of dimensions like family trees. I took a class in college about familial relationships which used similar concepts. My degree is in cultural anthropology.

  • Another difficult concept to master is context. You must understand the set or slice of data you are working with in a query.
  • Once you understand context, you need to realize that every measure is an aggregate of every dimension whether a part of the query or not.
  • Results can be shaped in multiple dimensions, but results with three or more axes cannot be rendered directly. If you want to visualize the data in a report, it needs to be flattened into columns and rows.
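To make these concepts concrete, a minimal MDX query might look like the following. The cube and member names assume the Adventure Works sample cube; adjust for your own model. The WHERE clause sets the slicer context, and the measure aggregates across every dimension not named in the query.

```mdx
-- Minimal sketch against the Adventure Works sample cube (names assumed)
SELECT
    [Measures].[Sales Amount] ON COLUMNS,
    [Date].[Calendar Year].CHILDREN ON ROWS
FROM [Adventure Works]
WHERE ( [Product].[Category].[Bikes] )  -- slicer: sets the filter context
```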

Multidimensional databases and MDX are extremely powerful but complex. I enjoyed working with them and became one of the few experts in the technology. However, multidimensional databases and MDX are rarely used today. Microsoft is not advancing the technology, instead promoting columnar data structures.

DAX, data analysis expressions

My first experience with DAX was when PowerPivot was released with Excel. It was then that I saw the writing on the wall for MDX. DAX is simpler and more approachable than MDX. Microsoft then added tabular models built on the same data engine, VertiPaq, eventually culminating in the Power BI model. The underlying data engine is a highly optimized columnar data structure.

Admittedly, I have the least hands-on experience with DAX, and I disliked it early on. Unlike SQL and MDX, DAX is not built around math principles and is not a query language; it is built with expressions. Instead of SELECT, it starts with an equal sign (=), which is more intuitive for Excel users. Early on, it was very frustrating for me.
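To illustrate the expression style, here are two DAX measures. The table and column names (Sales[SalesAmount], 'Product'[Category]) are hypothetical:

```dax
// Hypothetical table and column names
Total Sales = SUM ( Sales[SalesAmount] )

// CALCULATE modifies filter context, a rough analog of an MDX slicer
Bike Sales = CALCULATE ( [Total Sales], 'Product'[Category] = "Bikes" )
```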

DAX is continually being improved. Microsoft is also continuing to improve the underlying data engine and storage subsystem. Power BI models are one of the foundational building blocks of Microsoft Fabric.

Should you learn DAX?

If your business uses Power BI, then yes. DAX is used to aggregate, shape, and format data for usage by end users. It is not necessary for data engineers who don’t present data to end users.

Thoughts and musings

My first recommendation is to learn SQL if you want to be taken seriously as a data professional. It has been around since the beginning and will be around for a while to come. Relational data platforms are integrating columnar data storage technology which gives SQL users access to the performance available in Power BI models.
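As one concrete example, the columnar technology mentioned here surfaces in SQL Server as columnstore indexes. A sketch with a hypothetical fact table:

```sql
-- Hypothetical fact table; a clustered columnstore index stores the data
-- in compressed column segments, similar in spirit to the VertiPaq engine
-- behind Power BI models
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;
```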

While I contend that MDX is more powerful, I concur that DAX is more approachable. As MDX goes the way of COBOL, SQL remains the powerhouse. Learn DAX if you intend to use Power BI, otherwise don’t bother.

That is my 2 cents. Have a different opinion? Sound off in the comments below.

T-SQL Tuesday #193 – Notes to my past self and from my future self

It has been a while since my last T-SQL Tuesday blog. When I saw Mike Walsh’s topic for T-SQL Tuesday #193, I was intrigued and inspired – “Notes to yourself from the past and the future.” It has been a year and a half since I went on full time disability due to ALS. I worked for as long as I was able to contribute well. It was a sad but necessary reality. This very reality feeds into my notes.

Note to my past self, ten years ago

Don’t allow others to influence you away from your passion for data excellence and leadership. When you are told that becoming a Microsoft MVP is only a personal (or selfish) endeavor and will not help the company and it doesn’t matter, DON’T LISTEN! They are showing how little they know or understand you. The same is true about your data community involvement.

I was awarded the Microsoft MVP award for my work on Microsoft Fabric about a year and a half ago. It was then I learned how much access you get to the engineering teams at Microsoft and a worldwide network of fellow pros. Throughout my years working for Microsoft partners, I have been on data partner advisory councils. I appreciated that exposure, but MVPs already knew what we were hearing for the first time. I can only imagine how I could have helped steer the technological direction of my companies with that insight.

A quick soapbox… It never ceases to amaze me how many MVPs and advisory members do not give feedback and recommendations back to Microsoft. These relationships should be mutually beneficial. From personal experience, I know that giving feedback to Microsoft is beneficial. Many times, Microsoft teams and others thought I was an MVP even though I was not. It was because of my feedback on the data platform through other channels.  …end of soapbox.

My overall point to my past self is that you should pursue the path that you see as right for you. Don’t let the naysayers deter you. You need to play to your strengths.

Note from your future self

Before I start this, living ten more years would be amazing. ALS life expectancy is 2-5 years, and I am in year four.

Don’t stop what you are doing. It will continue to be easier and more efficient for you to create. Don’t be afraid of continuing to contribute to the data community. Your mind still works. It will take a lot of patience to work with data tools that are not eye gaze friendly. Don’t let that deter you. Have fun, find a lane and run with it.

I just started getting back to technology. I created the Data on Wheels ~ ALS website, only using eye gaze technology. “Look mom, no hands!” It has been a great experience for me and got me back into technology. I tried using Power BI for the data analysis but quickly realized that I needed Microsoft Fabric to do the work I wanted to do which is out of reach financially (trial capacities are time boxed). So maybe I will see what I can do to solve that problem.

Wrapping it up

I listened to company leaders early in my career. They were wrong. I trusted their input too much. I should have sought additional advice. It would have been better for me and the companies I enjoyed working for.

Today, I have the desire but, in some cases, not the patience or means to do data work. I enjoyed it a lot. I should not give up. I look forward to sharing what I learn.

My advice to everyone is to follow your passions and find enjoyment in your career. Seek counsel from many different perspectives in your pursuit.

Power BI, Excel, OneLake – Dreams Do Come True!

I can’t believe it’s finally here! A way to have Excel live in OneDrive and access it from Power BI nearly live! We can officially shortcut files to our OneLake from both SharePoint and OneDrive! I am super excited about this feature, and I hope you are too. This feature plus User Data Functions allows us to not only have data from Excel in our reports but keep it as fresh as needed. Imagine having budget allocations that you want to adjust right before or during a meeting. Now you can! You can edit a file in Excel and hit one button to see the new numbers in your report. In the past, we relied on 3rd party services or Power Apps licensing to accomplish this sort of experience. Now we can just use Excel, an old data friend.

Please note, THIS IS IN PREVIEW AND VERY NEW. I’ve included a ton of screenshots, but please be advised these may not be entirely reflective of the GA reality once this feature is released. My example uses a OneDrive folder, but you can easily do this with SharePoint as well! One caveat, you will need a Fabric capacity. This does work with a trial capacity.

  1. Creating the ShortCut in OneLake to OneDrive Folder
  2. Connecting to the File in Power BI
  3. Creating a Refresh Schedule in Power BI Service
  4. Optional – creating a manual refresh button using Translytical Task Flows
  5. Additional Resources

Creating the ShortCut in OneLake to OneDrive Folder

1 – Navigate to OneDrive online: https://onedrive.live.com/login

2 – Select the settings gear in the top right corner and select “OneDrive settings”.

3 – On the left-hand panel, select “More Settings” then scroll all the way down to the Diagnostic Information. From there, copy the OneDrive web URL. This is what we will use in Fabric to make the short cut.
NOTE – you will need to delete everything in that URL after the “_com” (in the screenshot below, that’s “_layouts/15/onedrive.aspx”).

4 – Navigate to a lakehouse in Fabric where you would like to access the content from.
5 – Hit the Get data drop down, then select “New shortcut”.

6 – Choose the OneDrive option. As of this post, it is currently in Preview and says “OneDrive (Preview)” on the button.

7 – Create a new connection. The Site URL will be what you copied from the OneDrive settings. I recommend renaming the connection to something like “[Name]’s OneDrive” so it’s clear where the data comes from.

8 – Now that we have a connection, you can point the shortcut to any folder in your OneDrive! Please note, it will grab all children folders inside as well as files. Be mindful of what you actually want to share within OneLake. On the bright side, this means you only need one shortcut per folder hierarchy which makes this much easier if you have files in multiple subfolders you want to share.

9 – Once you have a folder selected, it may give you an option to transform data (may just be me with CSVs and JSON files in my folders lol). You can skip this by hitting “previous” then “skip” or “Revert Changes” at the top. If you want to transform your CSVs to delta tables, simply hit next. I haven’t played around with these auto transformations yet, so if you have notes let me know! Anything with “Auto” and “Preview” scares me, so I stay away until at least the “Preview” is gone haha. Also, the auto transformation currently does not work; your shortcut will simply disappear into the void. Really excited to see where this ends up in the future though!

10 – Hit “Create” and boom! You can now access files from your OneDrive inside OneLake! It may take a couple of refreshes on your browser and a few seconds, but then you can go to the Files section of your lakehouse and see the files/folders in your shortcut. NOTE – at first it will show the folder title in your “Tables” section. That means it’s working. Try refreshing your browser (and give it a couple of minutes) and it will pop up in your files section. It does flash a share warning to you, don’t worry about that and just give it a bit to load in the right section. The time it takes is directly proportionate to the amount of information/size of files you’re dropping. If you look at my screenshot, you can see I have one file that’s over 4 GB. Probably not the best folder to pull in (should have gone one level deeper to avoid that file since all I want is in the CSVs folder), but I wanted to see if it can handle it.

If needed, hit the three dots next to your folder and manually move it to files section.

You can see the date modified matches what’s in your OneDrive! Now to test the syncing, let’s make a change to the dim_customer.csv.
Original view:

Change made, took 40 seconds to sync from my laptop to OneDrive then about instantly it was available in Fabric (by the time I moved to that tab and refreshed my page it was there!).

Holy cow! This is a game changer! No longer do folks need to upload files manually using the OneLake explorer (very buggy in my experience). Now you can just shortcut it in and allow your ETL process to always grab the latest version that’s been shortcut to OneLake!!

So that brings us to the next phase, how does this work with Power BI? Can we finally have a “live” experience with Excel file data in Power BI?

Connecting to the File in Power BI

1 – Open up your Power BI file. This should work in both the Power BI Desktop and the web Power BI experience, but my demo will use the Desktop.

2 – Connect to the lakehouse with our files. To connect to the files within a lakehouse, we’ll have to do a custom query since the main lakehouse connector only allows you to pull tables/views. Thankfully, a connector does already exist, it’s just not in our standard Get Data options. Create a blank query, then use the code below with your workspace and lakehouse id. To find the IDs, grab from the URL (https://app.fabric.microsoft.com/groups/WORKSPACEID/lakehouses/LAKEHOUSEID?experience=fabric-developer).

let
    workspaceID = "YOUR WORKSPACE ID",
    lakehouseID = "YOUR LAKEHOUSE ID",
    Source = Lakehouse.Contents(null){[workspaceId = workspaceID]}[Data]{[lakehouseId = lakehouseID]}[Data],
    Files_Folder = Source{[Id = "Files", ItemKind = "Folder"]}[Data]
in
    Files_Folder

3 – Navigate to your file by clicking on the link under “Data” (likely called “Folder”) > then the link under “Content” (also probably called “Folder”) > then the link under “Content” that ties to the file you want to open (mine was called “Binary” for the file I wanted). Now interact with it like any other file import in Power Query.

CSVs are relatively simple, so I pulled in an Excel file for the screenshots below to show what it looks like if you have formatted tables in your file. It shows BOTH formatted tables and sheets! Pretty awesome!

Creating a Refresh Schedule in Power BI Service

Now all that is already pretty amazing. We no longer need to mess with crazy links from Excel and can access all our data from the same place – OneLake! But how do we refresh it?

1 – Navigate to the semantic model settings.

2 – Under “Data Source Credentials” there will be a new source called “Lakehouse”. Open the “Edit Credentials” link and authenticate using the credentials you want to be used for the refresh (this can be a service account if needed, but it MUST be an OAuth2 source). Right now, there’s no way to connect other than OAuth2. That’s not great if you want to use service principals, but service accounts are still an option.

If you don’t need “live” updates, you can stop here. However, if you’re as patient as I am, then you’ll want a way to trigger a refresh of this data on the fly from within your report. Enter, translytical task flows.

Optional – creating a manual refresh button using Translytical Task Flows

1 – Create a User Data Function item in Fabric. It will be mostly blank because all we really want is a way to trigger the report refresh. This function will accept a name parameter and return a little message alerting the user that report refresh has been kicked off. Don’t forget to hit that “Publish” button in the top right corner to actually have this be live! The publishing process can take a bit, be sure it finishes publishing before looking for it in Power BI.

Here’s the code I’ll be using:

import datetime
import fabric.functions as fn
import logging

udf = fn.UserDataFunctions()

@udf.function()
def refresh_report(name: str) -> str:
    logging.info('Python UDF trigger function processed a request.')

    return f"Welcome {name}! Your report refresh will kick off at {datetime.datetime.now()}!"

2 – Navigate to Power BI Desktop and add a blank button. You can add any button, but blank looks cleanest and makes it easiest to be clear what we want people to do.

3 – Add a Data function action and ensure the “Refresh Report” toggle is on. This is the key functionality we are looking for.

4 – Create a measure to populate the current user’s name automatically. (Note: in the Power BI service, USERNAME() typically returns the user’s UPN, while in Desktop it returns DOMAIN\username; USERPRINCIPALNAME() is an alternative if you always want the UPN.)

User = USERNAME()

5 – Add that measure to the button by selecting the little fx option next to name.

6 – Configure your button with some text to let people know what it does.

7 – Publish and test! Enjoy your new button! Keep in mind, this will refresh the WHOLE model. My model is fairly small and quick, so not a huge deal. There’s not currently a way to have it only refresh one table using the UI, but if you want to make a more complex UDF notebook, you can have it refresh ONLY the table that’s been impacted. Talk about powerful.

I will write a follow-up blog soon covering how to adapt this method to refresh only the table you need, so stay tuned and hit the subscribe button to get a ping when new blogs are published!

UPDATE – I looked into ways to only refresh one table, but it requires using client secret credentials and cannot use semantic link, mssparkutils, or a large number of other libraries available in other notebooks in Fabric. I’m hoping this will change long-term, but for now please refer to this blog on how to use the REST API in a standard python notebook to refresh Power BI: https://medium.com/@arvind.g90/refresh-smarter-not-harder-power-bi-automation-with-rest-api-python-63923b37c9a6 .
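For the curious, here is a rough sketch of what that REST API approach looks like, using the Power BI Enhanced Refresh endpoint to target a single table. This is my illustration, not code from the linked post: the workspace and dataset GUIDs are placeholders, and acquiring the client-secret access token is left out of scope.

```python
import json
import urllib.request

# Hypothetical IDs -- replace with your own workspace and dataset GUIDs.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"
DATASET_ID = "11111111-1111-1111-1111-111111111111"


def build_refresh_request(table_name: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for an enhanced refresh of one table."""
    url = (
        f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
        f"/datasets/{DATASET_ID}/refreshes"
    )
    body = {
        "type": "full",
        # Listing objects limits the refresh to just these tables.
        "objects": [{"table": table_name}],
    }
    return url, json.dumps(body).encode("utf-8")


def trigger_refresh(table_name: str, access_token: str) -> int:
    """POST the refresh request; returns the HTTP status code."""
    url, data = build_refresh_request(table_name)
    req = urllib.request.Request(
        url,
        data=data,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

A successful call returns HTTP 202 (Accepted); you can then poll the same /refreshes endpoint to check on the refresh status.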

Additional Resources

Managing subscribers, creating newsletters

When I created the website on WordPress, I was expecting all the features we have on our WordPress.com site, which powers this blog. As I called out in my previous post, this is not the case. I wanted a way to let people subscribe to my content. It turns out I needed an email marketing solution, which was quite a surprise.

The search is on

I had a very basic list of needs.

  • A way to subscribe
  • Subscriber management
  • Email subscribers

The first platform recommended by WPBeginner, whose parent company is the source of many of my plug-ins, was Constant Contact. I then looked at HubSpot and Mailchimp because I had heard of them. They met my needs, but at a cost; where free options existed, they were limited in functionality. I ended up choosing Brevo, which has a robust free plan.

To be clear, I needed a low to no cost solution to meet my needs and fill this gap in functionality. I do not generate income from my website, so a free option is required. Brevo fits the bill nicely.

Ancillary costs

As part of the setup, I needed to set my sender email address and domain. Both needed to be verified and compliant with the major email providers’ requirements. I had to create DKIM and DMARC records in my domain’s DNS.
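For reference, the records look roughly like this. The domain, selector name, and key below are made-up examples; your email platform (Brevo, in my case) shows you the exact host names and values to copy into DNS:

```
; DKIM – publishes the public key the platform signs your mail with
mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...IDAQAB"

; DMARC – tells receiving servers what to do with mail that fails SPF/DKIM checks
_dmarc.example.com.           IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```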

Screenshot of the sender settings page in an email marketing platform, displaying details about verified sender domains, DKIM and DMARC configurations, and sender management options.

Little did I know that this was not a straightforward process. The email I am using is a Microsoft 365 account hosted and managed by GoDaddy, so getting the correct entries required contacting GoDaddy support. They worked on the backend to get me set up and created the DNS entries I needed.

Note: If you manage your own Microsoft 365 and Azure accounts, you can do this process. It will require elevated permissions to both environments.

The second requirement was turning on Advanced Email Security. I wanted to apply it only to the address I was using, but it is a domain-level setting, so all six email accounts on the domain had to be upgraded. This required upgrade is the single most expensive cost of my website build-out. On the plus side, we are getting far less spam. 😊

Image showcasing information about GoDaddy's Advanced Email Security, including details about online threats like malware, ransomware, and phishing.

Newsletters are a bonus

When I started working with Brevo, I was not sure how to send out emails. After some digging, I discovered campaigns. I am not an email marketer, so this was not obvious to me. I created my first campaign by following the steps provided. It was very easy to do.

Newsletter creation interface showing sender information, recipient count, subject line prompt, and design options.

I used a template to create my first newsletter, and the process was very intuitive. I added new items with a few clicks; I believe drag and drop is supported as well. I created a couple of test campaigns to see how it all worked and liked what I found. I created and sent my first newsletter shortly thereafter.

I really like the metrics. They tell me the delivery rate, opens, and clicks for each campaign, which has helped me understand the effectiveness of the newsletter.

Screenshot of a newsletter campaign report titled 'Newsletter #3 National Caregivers Month' showing details such as delivery rate, open rate, and click-through rate.

Managing subscribers

This functionality is the primary reason I wanted a solution. In Brevo, this work is done in the CRM. I started by creating a couple of lists. After I became familiar with lists, I imported a spreadsheet with family and friends who were invited to one of my children’s weddings. I used this to see who wanted to subscribe to the newsletter. I found a couple of additional lists that I imported to recruit newsletter subscribers. All the lists I used were ours and included people we know or who have expressed interest in our journey.

When I was done importing and recruiting, I had around 1,000 contacts. About 50 were blocklisted due to hard bounces and unsubscribes. More than 80% of the invitation emails were opened. In the end, I had around 60 subscribers. Overall, this process went very well.

I also use list segments. I created segments to manage my daily email count, which is limited to 300 emails a day on the free plan. I am also using segments to help me welcome new subscribers.

There is more functionality in the CRM that I don’t use. Most of it supports a sales pipeline, such as deals and tasks, and I don’t have a need for that capability.

Automation

I use automation to clean up my lists. It is the most difficult feature for me because it uses drag and drop to build the workflows. I have only begun to explore its full capabilities.

Flowchart showing a process with two steps: adding a contact to the 'Newsletter subscribers' list and removing a contact from the 'Friends and family' list, culminating in an exit point.

Final thoughts

I really like this application. The only thing I wish I could do is embed the form that allows subscribers to modify their information into my website. I can include it in the newsletter which works for now. This is a minor inconvenience. I would recommend Brevo for individuals like me and small organizations who need these capabilities. There are many more features available in the free version and even more in the paid tiers.

Check out my website and subscribe to the newsletter.