Over the course of my career, I've been part of multiple initiatives to start internal communities at work. Some of them became something genuinely special: people showing up, contributing, looking forward to the next gathering. Others quietly died after a few months, victims of low attendance and dwindling energy. For a long time I couldn't figure out what separated the successes from the failures. Was it the topic? The timing? The right people involved? I kept searching for the formula.

Then I listened to an episode of the ReThinking podcast last week where Adam Grant sat down with Dan Coyle, author of The Culture Code and his new book Flourish, and one thing Coyle said stopped me in my tracks. Community, he pointed out, literally means shared gifts. And shared gifts aren't something you passively receive. They're something you participate in.

We've been thinking about it the wrong way

Maybe you've tried to build an internal community before. You s...
Being new to Microsoft Fabric, I noticed that you have multiple options when writing notebooks in Python: run your code with PySpark (backed by a Spark cluster) or with Python (running natively on the notebook's compute). Both options look almost identical on the surface, since you're writing Python syntax either way, but under the hood they behave very differently, and picking the wrong one can cost you time, money, and unnecessary complexity. In this post I try to identify the key differences and give you some heuristics for deciding which engine to reach for.

Python vs PySpark: what's actually different?

When you select PySpark in a Fabric notebook, your code runs on a distributed Apache Spark cluster. Fabric spins up a cluster, distributes your data across multiple worker nodes, and executes transformations in parallel. The core abstraction is the DataFrame (or RDD), and operations are lazy: nothing actually runs until you trigger an action like .show() ...