SQL, MDX, DAX – the languages of data

Ramblings of a retired data architect

Let me start by saying that I have been working with data for over thirty years. I think that just means I am old. Anyway, I have written blog posts, delivered presentations, and authored books on these languages through the years. Understanding and using them has grown and shaped my career. I thought it would be fun to discuss my thoughts on each one. This is my take and some of my thoughts are definitely “tongue in cheek.” So, enjoy the ride and feel free to share your take in the comments.

  • SQL – ubiquitous and relational
  • MDX – complex and dimensional
  • DAX – formulaic and columnar
  • Thoughts and musings

SQL, structured query language

SQL is the oldest of the languages. It was designed to support relational database management systems (RDBMS). The relational model is grounded in mathematical principles, set theory and relational algebra, which improve performance and optimize storage. Normalization rules were established to guide developers on the preferred approaches to building databases.

Why is SQL ubiquitous?

SQL is everywhere. SQL is the query language of choice for enterprise data platforms such as Microsoft SQL Server and Oracle. Open-source data platforms like MySQL and PostgreSQL are also built to use SQL.

How is this possible? SQL is an ANSI standard. This means that the core of the language is managed by a governing body. If you learn SQL, you should be able to write queries in all these databases, right? Sort of. You should be able to write a query like this in all the databases: SELECT field1, field2 FROM table WHERE field3 = 50.

However, vendors often implement their own variations of SQL to meet needs in their platform design or to provide nonstandard functionality to their users. (Engineering outpaces standards development.) For example, Microsoft created T-SQL and Oracle created PL/SQL. One of my first experiences with this was returning a single row in a query. I used TOP 1 in SQL Server, but there was no TOP keyword in Oracle.

Code examples

SQL Server

SELECT TOP 1 column1, column2 
FROM table_name
WHERE condition;

Oracle

SELECT column1, column2
FROM table_name
WHERE condition
AND ROWNUM = 1; -- note: ROWNUM is assigned before ORDER BY, so sort in a subquery if order matters

PostgreSQL / MySQL

SELECT column1, column2
FROM table_name
WHERE condition
LIMIT 1;
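For what it's worth, recent versions of all three platforms also support the ANSI-standard row-limiting clause (roughly SQL Server 2012+, Oracle 12c+, and modern PostgreSQL; MySQL still relies on LIMIT), so a more portable sketch might look like this:

```sql
-- ANSI row-limiting clause; SQL Server requires the ORDER BY (and the OFFSET) before FETCH
SELECT column1, column2
FROM table_name
WHERE condition
ORDER BY column1
OFFSET 0 ROWS FETCH NEXT 1 ROWS ONLY;
```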

What does it all mean?

You can learn SQL once and query data efficiently across multiple data platforms. Whether you are a data engineer or a data analyst, you must know SQL if you are to be taken seriously as a data professional.

MDX, multidimensional expressions

I was introduced to MDX by SQL Server Analysis Services (SSAS). For me, it just clicked. More about that in a bit. MDX, like SQL, is heavily based on math. Whereas SQL is two-dimensional (columns and rows), MDX can theoretically use an unlimited number of dimensions. The number is limited in practice by the capability of the data platform. MDX was primarily used by two vendors, Microsoft and Hyperion.

One key difference between the two platforms is their purpose. Relational databases are optimized for transactions and small result sets. Multidimensional databases are built for analysis across huge datasets.

Why is MDX considered complex?

The toughest part for most data professionals is visualizing multidimensional datasets in their minds. Relational data is easy to visualize. It looks like a spreadsheet. Multidimensional data is not that simple. We call it a cube, but that is a simplistic representation with only three dimensions. It is a cool name though.

Earlier in my career I coached data consultants on their transition to BI consultants. As I helped a consultant with MDX, I told him at some point he would “get it.” I told him to call me when he did. Six months later he called me, told me that he got it, and hung up on me. Many consultants didn’t get it and either just forced their way through it or went back to relational.

MDX was designed to traverse dimensions, build sets, and aggregate values across those sets. I mentioned earlier that MDX made sense to me right away. The first time I was exposed to MDX, I learned about the various functions and methods to work with dimensions including child, parent, descendants, and ancestors. You could think of dimensions like family trees. I took a class in college about familial relationships which used similar concepts. My degree is in cultural anthropology.

  • Another difficult concept to master is context. You must understand the set or slice of data you are working with in a query.
  • Once you understand context, you need to realize that every measure is an aggregate of every dimension whether a part of the query or not.
  • Results can be shaped in multiple dimensions. Three or more dimensions cannot be visualized. If you want to visualize the data in a report, it needs to be formed into columns and rows.
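For a flavor of how those family-tree functions and axes look in practice, here is a small sketch of an MDX query. The cube, dimension, and member names follow the familiar Adventure Works sample and are illustrative only:

```mdx
SELECT
    { [Measures].[Internet Sales Amount] } ON COLUMNS,
    -- walk down the family tree: every Month under the 2013 Calendar Year member
    DESCENDANTS(
        [Date].[Calendar].[Calendar Year].&[2013],
        [Date].[Calendar].[Month]
    ) ON ROWS
FROM [Adventure Works]
-- the WHERE clause sets the slice (context); every other dimension is still aggregated
WHERE ( [Customer].[Country].&[United States] )
```

Even though only two dimensions appear on the axes, the measure is being aggregated across every dimension in the cube, which is the context point above.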

Multidimensional databases and MDX are extremely powerful but complex. I enjoyed working with them and became one of the few experts in the technology. However, multidimensional databases and MDX are rarely used today. Microsoft is not advancing the technology, instead promoting columnar data structures.

DAX, data analysis expressions

My first experience with DAX was when PowerPivot was released with Excel. It was then that I saw the writing on the wall for MDX. DAX is simpler and more approachable than MDX. Microsoft then added tabular models built on the same data engine, VertiPaq, eventually culminating in the Power BI model. The underlying data engine is a highly optimized columnar data structure.

Admittedly, I have the least amount of hands-on experience with DAX. However, I disliked it early on. Unlike SQL and MDX, DAX is not built around math principles and is not a query language. It is built with expressions. Instead of SELECT, it starts with an equal sign (=). This is more intuitive for Excel users. Early on, it was very frustrating for me.
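To make the contrast concrete, here is what a few measures might look like in a hypothetical Power BI model (the Sales and Date table names are invented for illustration). Note these are expressions that define measures, not a query:

```dax
Total Sales = SUM ( Sales[SalesAmount] )

Sales Last Year =
CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

Sales YoY % =
DIVIDE ( [Total Sales] - [Sales Last Year], [Sales Last Year] )
```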

DAX is continually being improved. Microsoft is also continuing to improve the underlying data engine and storage subsystem. Power BI models are one of the foundational building blocks of Microsoft Fabric.

Should you learn DAX?

If your business uses Power BI, then yes. DAX is used to aggregate, shape, and format data for usage by end users. It is not necessary for data engineers who don’t present data to end users.

Thoughts and musings

My first recommendation is to learn SQL if you want to be taken seriously as a data professional. It has been around since the beginning and will be around for a while to come. Relational data platforms are integrating columnar data storage technology which gives SQL users access to the performance available in Power BI models.

While I contend that MDX is more powerful, I concur that DAX is more approachable. As MDX goes the way of COBOL, SQL remains the powerhouse. Learn DAX if you intend to use Power BI, otherwise don’t bother.

That is my 2 cents. Have a different opinion? Sound off in the comments below.

SQL Saturday Baton Rouge 2025

SQL Saturday Baton Rouge is a great event hosted on LSU’s campus that has some of the best food, presenters, and attendees in the country! Had some incredible questions for my session and learned a ton throughout the conference. Thank you to all the sponsors and volunteers who make this event happen year after year (16 years running!). It’s one of the first events I ever spoke at (https://dataonwheels.wordpress.com/2022/08/23/sql-saturday-baton-rogue-recap/), and I’m so excited I was able to speak here again!

Here’s a link to the event where you can find other incredible speakers and sessions! Pictured below are some of my SML (Saturday Morning Learning) buddies – Dominick Raimato, Jason Romans, Kevin Pereira, and me! Thanks Sheila Romans for the great picture of us nerds!

My session – Mastering Microsoft Fabric Data Warehouses Tips & Tricks

Thanks to everyone who came out to my session and came with me on a journey to kill Pippin’s long-running query, create case insensitive warehouses, and monitor our warehouse and lakehouse queries on a single pane of glass. You guys were such an engaged audience and I loved being able to laugh and commiserate about our SQL experiences together!

You can find the presentation and resources here – https://github.com/Anytsirk12/DataOnWheels/tree/main/2025%20Presentations/2025%20SQL%20Saturday%20Baton%20Rouge

Jason Romans – How to Diagnose a Slow Power BI Report

Jason gave an awesome session about how to use external tools to diagnose and even proactively solve a slow Power BI report. Check out his session live if you get a chance; lots of great slides and demos! Below is the abstract from his session. Be sure to follow the DAX Shepherd.
https://thedaxshepherd.com/presentations/

Users complain that a Power BI report is so slow that it is not worth using. A Power BI report may not perform well for many reasons, but how do you know where to start? In this session, I will show you some of the main tools I use to dive deep into the common and the not-so-common performance issues I run into. Starting from a slow report, we will learn how to diagnose where the problem is. Beginning with Performance Analyzer in Power BI, we can start to figure out where the issue is. Is it the visual that is slow, or are the DAX queries taking too long?

Next, we will use DAX Studio to tune the DAX queries that are being generated. Along the way, we will look beneath the visuals to see how Power BI is architected so we can better understand how to write better code from the start. We will examine Query Plans and Server Timings to evaluate which of our changes are more beneficial.

Lastly, we will examine methods for detecting and preventing performance issues before they occur. Using Tabular Editor, we will learn how Best Practice Analyzer can detect performance issues with your semantic models.

Kevin Feasel – Applying Forensic Accounting Techniques using SQL and Python

Below are my stream-of-consciousness notes from this session; it was a ton of fun to forensically investigate a SQL environment using Python. The session was packed with great examples and was an absolute blast to play detective. Kevin runs Curated SQL, which is an awesome site to get a “curated” list of recent blogs with incredible insights across the data community.
https://curatedsql.com/

Forensic accounting deals primarily with investigation and litigation. The goal is to connect the dots on potential fraud in a way that will stand up in court.

Step 1 – Summary Analysis

  1. Find largest table in terms of records or size
  2. For largest table, review data types and keys
  3. Look at foreign keys off that largest table and do the same analysis over and over.

You end up with a database model. That gives you a high-level view of the database.
There will be some python notebooks available in the future that will walk through how to explore a sample database.

Pandas has a method, df.describe(include='all'), that will give you stats on a dataframe. Pretty neat way to easily get some info, especially on numeric values. Note: you should convert the date fields to dates within the dataframe (they likely default to objects); then you'll get similar stats for the date columns.
Pandas also has good ways to easily group, count, and sort (you can also do this in SQL), so you can easily see what the top artifacts are within certain categories. One example is the number of vendors with certain expense categories. For example, maybe only one vendor is selling cleaning supplies. There may be a reason for it, but in an auditing scenario, that is a good place to start looking for preferential treatment and fraud.
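As a sketch of both ideas, with a tiny invented expense table:

```python
import pandas as pd

# Hypothetical expense data for illustration
df = pd.DataFrame({
    "vendor":   ["A", "A", "B", "C", "B", "A"],
    "category": ["cleaning", "office", "cleaning", "travel", "office", "cleaning"],
    "amount":   [120.00, 35.50, 99.90, 410.00, 20.00, 130.00],
    "invoice_date": ["2024-01-05", "2024-01-12", "2024-02-01",
                     "2024-02-15", "2024-03-03", "2024-03-20"],
})

# Convert the date column so describe() reports date statistics too
df["invoice_date"] = pd.to_datetime(df["invoice_date"])
print(df.describe(include="all"))

# How many distinct vendors supply each expense category?
# A category served by a single vendor is a natural place to look closer.
vendors_per_category = df.groupby("category")["vendor"].nunique().sort_values()
print(vendors_per_category)
```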

Step 2 – Growth Analysis

This looks at changes in ratios over time. Goal is to look for major changes which look to be far outside the norm. Typically this is performed against important additive measures like revenue, cost, number of customers, number of orders, and even active users as a percent of total users (a “fixed cap” ratio).

We can perform an eyeball test at this phase. We can also do comprehensive analysis using Kolmogorov–Smirnov (KS) and PPS tests. The seaborn library in Python allows you to easily create line plots and other visuals to see how additive measures perform over time. This allows you to investigate and gut-check growth over time. This can be particularly interesting if you expect a linear increase or decrease but see something more random.

MinMaxScaler in scikit-learn's preprocessing module can rescale everything onto a range from 0 to 1, allowing you to really compare the growth of two very differently scaled items.

Gap analysis – sometimes the lack of value can be more interesting. Two meanings – review of controls and where things can go wrong & what data is missing.
Controls – do people have access to edit logs, who can do what? Does data change and do we know when it does? Are we tracking failures? Double-checking calculations on reports.
Gaps and Islands – gaps in data. Gaps are elements in a sequence which are missing. Islands are contiguous sets of data up to and not including the gap.
Accountants care about sequences that should be complete like receipt numbers, purchase orders, check numbers, invoice numbers, etc. Example check 1003 is missing, your accountant wants to see the voided check.

How to solve gaps and islands. Use DENSE_RANK() to build an ordering then calculate a difference between actual value and the DENSE_RANK() to find where the gap pops up and the islands get created.
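A self-contained sketch of the technique (SQLite via Python so it runs anywhere; the DENSE_RANK() expression works the same way in T-SQL, and the check numbers are invented):

```python
import sqlite3

# Hypothetical check register with gaps: 1003 and 1006-1007 are missing
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE checks (check_no INTEGER)")
con.executemany("INSERT INTO checks VALUES (?)",
                [(1001,), (1002,), (1004,), (1005,), (1008,)])

# check_no minus its dense rank stays constant within an island
# and jumps to a new value after every gap
rows = con.execute("""
    SELECT check_no,
           check_no - DENSE_RANK() OVER (ORDER BY check_no) AS island_id
    FROM checks
    ORDER BY check_no
""").fetchall()
print(rows)
```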

Warning with gaps – SQL will typically have gaps in IDENTITY columns. This happens often with rollbacks, where the identity value gets burned. This also applies to SEQUENCE. Building a gapless sequence is a huge hit on capacity, which is why it is not the default: it forces single-row insertion and effectively requires a serializable isolation level.

Regression Analysis

A regression contains a dependent variable (the thing we want to explain). For example, for a count of invoices per month, the count of invoices is the dependent variable. The dependent variable must be a continuous or countable feature, and there can be only one per regression. (Multivariate multiple regression is a thing, but it is really a series of regressions.) We have one or more independent variables that we think will explain the dependent variable (time, people, customer, location, etc.). We assume the independent variables drive the dependent variable but not other independent variables. The data should ideally have a good amount of variety in the independent variables; a rule of thumb is 30-100 data points per combination of independent variables, and more data is better. We also assume that past behavior is similar to future behavior.

In Python, from sklearn.linear_model import LinearRegression and from sklearn.metrics import mean_squared_error make conducting a linear regression super easy to implement. You need to fit the model to see how the dependent variable is impacted by the provided independent variables. (With statsmodels, model.summary() will give you the OLS regression results.) Be sure to use what you learned from your gut-check line charts. For example, if there's a significant deviation from the line in a certain year, try building the linear regression for before and during that significant difference and compare that to what you get after. This may also help you find the true outliers that led to the craziness overall.
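A minimal sketch with invented invoice counts (the underlying trend is roughly 3 extra invoices per month plus noise):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical dependent variable: invoice counts over 24 months
rng = np.random.default_rng(42)
months = np.arange(1, 25).reshape(-1, 1)               # independent variable
invoices = 50 + 3 * months.ravel() + rng.normal(0, 2, size=24)

# Fit, then see how well the independent variable explains the dependent one
model = LinearRegression().fit(months, invoices)
predicted = model.predict(months)
mse = mean_squared_error(invoices, predicted)
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}, mse={mse:.2f}")
```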

Cohort Analysis – looking at top-level aggregates or by a single slice might help identify problems. Differences should be able to be explained by domain experts.

  • Duplication of entries
  • frequency of one thing versus another
  • changes in frequency over time
  • shape of numeric distribution

Intuition on duplication

  • People don’t like to perform more work than necessary
  • when given an option between more work and less work, people will generally choose less work
  • duplication is usually considered more work
  • if the low-work option is easy to understand, people are more likely to gravitate to it, ceteris paribus

These are not rules of nature, just intuitions.

Digit analysis – you should expect a uniform distribution of the last digit in a sequence. There are exceptions. Budgets are usually rounded off and end with 5s or 0s. Estimates in general are also rounded to 5 or 0. Less precise measurements often end in 0.

When people try to fake data, they tend to go with “more random” numbers like 2, 3, 7, and 8. Benford’s law – unlike the final digit, the first digit of a sequence often follows a logarithmic distribution: 1 is the most likely leading digit, then 2, and so on. It works best when the data spans 2-3 orders of magnitude or more. Pretty neat, definitely look up Benford’s law.
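The expected first-digit frequencies under Benford's law are easy to compute directly:

```python
import math

# Benford's law: P(leading digit = d) = log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
for d, p in benford.items():
    print(d, f"{p:.1%}")   # 1 leads about 30% of the time, 9 under 5%
```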

The Python library mentioned for this was Beautiful Soup. Pretty cool. (Beautiful Soup is primarily an HTML/XML parser, so it likely handled gathering the data rather than the digit analysis itself.)

Keep records of your findings. As you dig, you will want to have that data readily available for future reference.

Jonathan Stewart – Building scalable data warehouses in Fabric

Jonathan walks through the basics of building a data warehouse that can easily grow without taking down capacity. He walks through best data integration options, star schema principles, and some great data modeling best practices. Column selection, JOIN, and aggregation optimization techniques were the focus.

V-Ordering, when should you do it? It’s enabled by default for read optimization on the data warehouse but off by default for data lakehouses to prioritize data timeliness.

  • Direct Lake Semantic models
  • If user experience is priority over data timeliness

Size Optimization Strategy = choose the smallest data type that supports your data requirements. Plan for scaling, but realistically. A column that is yes/no will never have more than 3 characters so it can be a VARCHAR(3).
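A hypothetical sketch of that sizing principle in table form (table and column names invented):

```sql
CREATE TABLE dbo.OrderFlags (
    OrderID   INT        NOT NULL, -- covers ~2.1 billion rows; reach for BIGINT only if you truly need it
    IsExpress VARCHAR(3) NOT NULL, -- 'yes' / 'no' never exceeds 3 characters
    LineQty   SMALLINT   NOT NULL  -- fine for quantities up to 32,767
);
```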

His abstract:

Learn how to design and implement scalable data warehouses using Microsoft Fabric. This session will cover best practices for data storage, management, and retrieval, as well as the use of Fabric’s data warehousing features. Attendees will gain insights into creating efficient data pipelines, optimizing performance, and ensuring data consistency.

SQL Saturday NYC Recap 2025

My first time at SQL Saturday New York City was an absolute blast! Thank you so much to the organizers and the incredible people who chatted with me during and after the event, the people always make these events worth the trip. Here’s the link to the event, feel free to connect with speakers that you wish you would have been able to hear from! You never know, they may give the talk near you sometime! https://sqlsaturday.com/2025-05-10-sqlsaturday1105/

My Session – Mastering Microsoft Fabric Data Warehousing: Tips & Tricks You Need to Know

Thank you so much to everyone who came to my session! I need to do some more tongue twisters to not mix up Warehouse and Workspace when I do this talk in the future but thank you all for laughing with me. There were some incredible questions and comments, and I loved the conversations that came from this session. This was my first time giving this session, so please feel free to reach out with any feedback you may have, I love evolving my sessions to be more useful to everyone who attends.

Here’s a link to my GitHub that has all the slides, monitoring scripts, and Power BI report used to monitor the queries: https://github.com/Anytsirk12/DataOnWheels/tree/main/2025%20Presentations/2025%20SQL%20Saturday%20NYC

Women in Technology Panel

This was also my first time speaking on a Women in Tech (WIT) panel! It was incredible to sit up there with some extremely talented women and discuss how “dangerous phrases” in business impact us and others in the community. The discussion was electric, and I loved hearing all the varying experiences we’ve had and discussing how we can grow together to make an even brighter future. Can’t wait for the day when there are so many qualified, incredible women speakers that we have lines in the restroom and a panel with all the women speakers at an event isn’t possible. For context, this event had 35 wonderful speakers, 6 of whom were women (17%).

John Miner – Create an analytics foundation using Fabric Warehouse

GitHub link= https://github.com/JohnMiner3/community-work/tree/master/analytics-foundation-fabric-warehouse

Don’t be afraid to use GenAI to help rewrite scripts to be compliant with Fabric.

Definitely look at the make-metadata T-SQL notebook; it has great structures to replicate when building a metadata-driven approach. It even covers building a log for your ETL process! Super neat.

Paul Turley – Moving from Power BI to Microsoft Fabric

Slides: https://sqlserverbi.blog/presentations

With Fabric, you now have the option to push elements of BI upstream. Medallion architecture is a common way we move data from ingestion to cleaned up and ready for reporting.

Semantic modeling options

  • VertiPaq (import mode)
    • In-memory cache
    • refreshed
    • column store/compression
    • internal storage optimization is done with various formats of encoding
  • Direct lake
    • Optimized for Fabric lakehouse storage
    • similar characteristics to VertiPaq
    • in-memory based on column use
    • compressed in parquet
    • data available within seconds of updates
    • v2 direct lake limitations will be eliminated in late 2025 – database views, calculated columns, mixed storage mode
  • Direct query
    • pass through to/from source
    • reads directly from the data source
    • DAX query generated by report and translated to source native query language
    • no model data on disk
    • no data cached in memory
    • no compression
    • row size limits
    • significantly slower in visuals

Don’t freak out, you don’t HAVE to change anything. You just have more options now.

Enterprise Principles:

  • Governance is king: ownership, business & IT
  • Traceability from source to semantic model & report – defendable numbers
  • endorsement & certs
  • scalable – storage, transformation, modeling, visual presentation
  • version control
  • continuous delivery
  • deployment

Version control – moving from Power BI to Fabric isn’t about what you have to do, but what you git to do lol

What do I have to do with Fabric?

  • Replace PBI Premium capacities with Fabric capacities
  • consider fabric data engineering patterns in the future

That’s it! You have more opportunities, but that’s all you HAVE to do.

How-To Convert Excel Date & Time to SQL

Hey data friends! This one comes from my personal vault (aka backlog of drafts I’ve been needing to write up) and is a really simple bit of code that I always forget how to write. So, to save us all some ChatGPT-ing, here’s my tried-and-true way of converting Excel Date & Time fields to a true date and time in SQL.

Let’s say you have the following Excel table (don’t worry these people are not real) and are loading it into SQL Server through an ETL process.

When the data gets to SQL, you realize that the call timestamp is coming in as a number! How do you get this to a proper date time field in SQL?

The most accurate way to get the date and time out of this field is to split the field on the period and calculate the date separately from the time. We will use a CTE to split the date and time fields then process the changes.

WITH separate_date_time AS (
	SELECT 
	*
	--SUBSTRING lets us grab a portion of the text; here, from position 1 up to (but not including) the '.'
	--CHARINDEX gives us the location of the '.' so we know where to end our substring selection
	,SUBSTRING([Call Timestamp], 1, CHARINDEX('.',[Call Timestamp]) - 1) AS 'CallDate'
	--for the time, we will start at the '.' and grab everything after that
	,SUBSTRING([Call Timestamp], CHARINDEX('.',[Call Timestamp]), LEN([Call Timestamp])) AS 'CallTime'
	FROM dbo.ExcelUploadDemo
)

SELECT 
ID 
,[Call Timestamp]
,CallDate
,CallTime
, DATEADD(D, CAST(CallDate AS INT), '1899-12-30') AS [CleanedCallDate]
, DATEADD(SECOND, ROUND(CAST([CallTime] AS float) * 86400, 0), CAST('00:00' AS TIME)) AS [CleanedCallTime]
--datetime and time types cannot be added directly, so add the time's seconds onto the cleaned date instead
, DATEADD(SECOND, ROUND(CAST([CallTime] AS float) * 86400, 0), DATEADD(D, CAST(CallDate AS INT), '1899-12-30')) AS [CleanedCallDateTime]
FROM separate_date_time

Little note, you need to round because occasionally a :00 second value will be converted to 59 seconds of the previous minute. Happy coding friends!
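If you ever want to sanity-check the conversion outside SQL, the same arithmetic is easy to reproduce in Python (the example timestamp is invented):

```python
from datetime import datetime, timedelta

EXCEL_EPOCH = datetime(1899, 12, 30)  # day zero of Excel's (Windows) date system

def excel_serial_to_datetime(serial: float) -> datetime:
    """Whole part = days since 1899-12-30; fractional part = fraction of a day."""
    return EXCEL_EPOCH + timedelta(days=serial)

# Round-trip a hypothetical call timestamp through its serial number
ts = datetime(2024, 1, 1, 18, 0, 0)
serial = (ts - EXCEL_EPOCH).total_seconds() / 86400
print(serial, excel_serial_to_datetime(serial))
```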

Dynamically Unpivot columns in SQL

Picture this: your data ingestion team has created a table that has the sales for each month and year split into different columns. At first glance, you may think, “What’s the big deal? Should be pretty easy, right? All I need to do is unpivot these columns in Power BI and I’m good to go.” So you go that route, and the report works for one month. Next month, you get an urgent email from your stakeholders saying they can’t see this month’s numbers. That’s when you realize that this table will grow with new columns every month. That means that any report you make needs a schema refresh every single month. Unfortunately, Power BI will not grab new columns from a table once it’s published into the online service. The only way for the Power Query to unpivot the new columns is for you to open the report in your desktop, go to Power Query, and refresh the preview to get all the columns in that table.

If you’re like me, you may not have time (or the will) to manually download, fix, and reupload this report each month. But have no fear! SQL is here! The unpivot function in SQL can feel like a black box, designed for only black belts in SQL to use, but we will break it down together and create a dynamic unpivot script.

What does unpivot in SQL accomplish? Unpivot takes multiple columns and converts them to rows. The general syntax is as follows:

SELECT 
originalColumn1, originalColumnHeaders, originalValues
FROM 
(SELECT originalColumn1, originalColumn2, originalColumn3, originalColumn4 FROM dbo.Table) T 
UNPIVOT (originalValues FOR originalColumnHeaders IN (originalColumn2, originalColumn3, originalColumn4)) AS UP

Let’s break this down.

The first SELECT statement contains the column we don’t want to unpivot plus the two names of what we want our final columns to be. For example, using our table below, we want to keep the country column as is, but we want to unpivot those sales months and the sales values. So, we may name the originalColumnHeaders as SalesMonth and originalValues as Sales. You can use whatever you’d like, but it has to match what’s in the UNPIVOT statement.

The SELECT after FROM tells the query where to get data from. By default, SQL requires this to be hard-coded, but don’t worry, we have a way for that to be dynamic.

The UNPIVOT statement requires us to provide two new column names. The first is the column that will hold the old values from those columns (aka Sales). The second, after FOR, is the column that will hold the old column headers (in our case, SalesMonth).

The hard-coded version of our example would look like this:

SELECT 
country, SalesMonth, Sales
FROM 
(SELECT country, sales_jan_2024, sales_feb_2024, sales_mar_2024, sales_apr_2024, sales_may_2024 FROM dbo.Table) T 
UNPIVOT (Sales FOR SalesMonth IN (sales_jan_2024, sales_feb_2024, sales_mar_2024, sales_apr_2024, sales_may_2024)) AS UP

The result of this code looks pretty good! Now we can easily pull the month and year out from SalesMonth and have some data we can trend. But we still have that pesky hard coded problem.

To get around the hard coding, we have to do some SQL jujitsu. We will use a couple of variables that can be dynamically populated, then execute the SQL as a variable using sp_executesql. This is a bit easier to explain with in-code notes, so feel free to look through this query and read the notes that go along with it for details on how it works.


--Use this script to create the table for the demo
/* --Comment this line out to run the create table code
DROP TABLE IF EXISTS dbo.PivotedSales
CREATE TABLE dbo.PivotedSales (
	country VARCHAR(250),
	sales_jan_2024 DECIMAL(17,3),
	sales_feb_2024 DECIMAL(17,3),
	sales_mar_2024 DECIMAL(17,3),
	sales_apr_2024 DECIMAL(17,3),
	sales_may_2024 DECIMAL(17,3)
);
INSERT INTO dbo.PivotedSales (country, sales_jan_2024, sales_feb_2024, sales_mar_2024, sales_apr_2024, sales_may_2024)
VALUES 
	('South Africa', 111.22, 222.33, 333.44, 444.55, 555.66),
	('Canada', 112.22, 227.33, 332.44, 400.55, 500.66),
	('United States', 113.22, 228.33, 330.44, 401.55, 501.66),
	('Mexico', 114.22, 229.33, 334.44, 404.55, 504.66),
	('Ireland', 115.22, 230.33, 335.44, 409.55, 509.66),
	('Germany', 116.22, 231.33, 336.44, 499.55, 599.66),
	('South Africa', 1011.22, 2022.33, 3303.44, 4044.55, 5505.66),
	('Canada', 1102.22, 2027.33, 3302.44, 4000.55, 5000.66),
	('United States', 1103.22, 2280.33, 3030.44, 4001.55, 5001.66),
	('Mexico', 1104.22, 2209.33, 3034.44, 4004.55, 5004.66),
	('Ireland', 1105.22, 2300.33, 3305.44, 4009.55, 5009.66),
	('Germany', 1106.22, 2310.33, 3036.44, 4909.55, 5909.66);

--*/

--Original table
SELECT * FROM PivotedSales


--Set up parameters
DECLARE @UnpivotColumns NVARCHAR(MAX), @FilterPrefix NVARCHAR(MAX) 
-- FilterPrefix is the prefix that all the columns we want to unpivot share. This will allow us to dynamically grab any new columns as they get created
SET @FilterPrefix = 'sales%' --Note, you can adjust this to be a suffix or a middle string if needed by moving the % (wildcard)


--This section sets our @UnpivotColumns variable to be a comma separated list of the columns in our table that match the FilterPrefix
SELECT @UnpivotColumns = STRING_AGG(CONVERT(NVARCHAR(MAX), C.name),',')
FROM sys.columns C 
INNER JOIN sys.types T ON T.user_type_id = C.user_type_id
WHERE C.object_id = OBJECT_ID('dbo.PivotedSales') --this ensures we only get columns from our table
	AND C.name LIKE @FilterPrefix --this makes sure only columns with the filter prefix are returned
	AND (T.name = 'decimal' OR T.name LIKE '%int%') --this ensures we only grab columns with a decimal or int type. You can adjust this if needed

SELECT @UnpivotColumns AS 'Unpivot Columns'


--This section creates a dynamic SQL statement using the comma separated list we just generated
DECLARE @ExecuteSQL NVARCHAR(MAX)
SET @ExecuteSQL = '
SELECT Country, SalesMonth, Sales
FROM 
	(SELECT Country, ' + @UnpivotColumns + ' FROM dbo.PivotedSales) P
	UNPIVOT
	(Sales FOR SalesMonth IN (' + @UnpivotColumns + ')) as S'
SELECT @ExecuteSQL AS 'Dynamic SQL Script' --this will show you the SQL statement we've generated


--Finally, we will use the system stored proc sp_executesql to execute our dynamic sql script
EXECUTE sp_executesql @ExecuteSQL

Now that is a lot of code, so be sure to look at each chunk and make sure you adjust it for your use case. Happy coding folks!
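As an aside, if your data also flows through Python at some point, pandas can do the same dynamic unpivot with melt(), picking up new columns automatically (table contents invented to mirror the demo):

```python
import pandas as pd

# A small stand-in for the PivotedSales table
df = pd.DataFrame({
    "country": ["Canada", "Mexico"],
    "sales_jan_2024": [112.22, 114.22],
    "sales_feb_2024": [227.33, 229.33],
})

# Grab every sales_* column that exists today, so next month's column needs no code change
sales_cols = [c for c in df.columns if c.startswith("sales")]
long_df = df.melt(id_vars="country", value_vars=sales_cols,
                  var_name="SalesMonth", value_name="Sales")
print(long_df)
```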

Huge credit for this solution goes to Colin Fitzgerald and Miranda Lochner! Thank you both for sharing this with me!