All posts by gestrich

AI for Developers: Pitfalls and Patterns For Production Code

AI is a genuine force multiplier for developers, as I’ve written about before. It can help you communicate, manage your codebase, and ship things faster.

But there’s not really any training on how to use it well, and it’s easy to pick up habits that are counterproductive. Here are some practices and pitfalls I’ve encountered, mostly from doing them myself.

AI Development

1. Your Content Should Still Be Your Own

Let AI edit your thoughts, refine your phrasing, help you research, help you brainstorm. But what you produce should still be your content.

Our expertise as developers is in the choices: design choices, technology choices, API decisions. AI is an input. It gives you options. The expectation is that you iterate with it, consider alternatives, and evaluate tradeoffs. AI may point out something you didn’t think of or present other options, but in the end, you chose that solution over the alternatives, and that’s what matters.

Practically, that means you own the code line by line. If AI inserts something you don’t understand, go look it up. Validate why it did that, whether it’s correct. Either you learn something new, or you find out the AI was wrong. Either way, the expectation is you chose the solution and are ready to defend it.

2. Give AI Context About Your Codebase

AI knows nothing about your code standards unless you tell it. Without documented conventions, styles, or architectural guidance, it’s going to look at whatever patterns already exist and repeat them, whether you like those patterns or not.

Create Agent skills. Write markdown files that describe how you want things done. Without that, every session starts from scratch: AI guesses, you correct, repeat. This is also why I find it less interesting when someone points a model at a cold codebase and demos a feature. That’s neat, but a codebase without documented skills and conventions isn’t really ready for AI yet. The interesting thing to me is what happens when the AI has real context to work with.
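As a rough illustration, a skill file can be as simple as a short markdown document of conventions. The filename and rules below are hypothetical, not a prescribed format:

```markdown
# Skill: Swift Error Handling

When writing Swift code in this repo:

- Throw typed errors from services; never return optional "failure" values.
- Catch errors at the view layer and surface them to the user via an alert.
- Never use `try?` to silence an error without a comment explaining why.
- Log every caught error with enough context to reproduce it.
```

A handful of files like this, one per convention, is enough to stop the guess-and-correct cycle for the things you care about most.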

Without architectural direction, AI solves the immediate problem by extending whatever’s already there. A 2,000-line class becomes 2,500. It’s not thinking about structure, just shoehorning in the feature. This is how weird patterns start emerging that nobody asked for. Without skills pointing AI in the right direction, it’ll grow a messy architecture faster than a human ever could.

3. Build Feedback Loops for AI

AI should be able to verify whether it got the right result. Unit tests are great for this. But it can also mean a CLI interface that lets AI access your app’s internals, running the same algorithms from the command line. This is especially useful for UI developers who don’t have that by default.

The more ways AI can check its own work, the less you have to catch manually.

Communication

4. Never Answer Someone’s Question With AI

If someone asks you a question, never regurgitate an answer from AI that clearly isn’t your own thinking. That’s a starting point for an answer, maybe, but it shouldn’t be the answer. We’re still humans and we expect to talk to humans. If you don’t have time to answer their question, just send them the prompt. It’s the new “let me Google that for you.” 🙂

5. Leverage AI for Your Writing

If you have a 10-paragraph document that could be 3 concise paragraphs, and you haven’t used AI for that, you’re missing out. If you avoid writing because you hate it, you’re missing out even more. AI can take your complex thoughts and make them coherent with minimal effort.

Use it on Slack posts before sending them to a large channel. Use it to polish technical docs. My first drafts are never good. They always need to be tightened up and made more concise, and that’s where AI shines. Think of it as a free editor that’s always available. Just make sure to ask it to remove those em dashes 🙂

6. Clean Up Your AI-Generated Pull Request Summaries

A pull request summary is used by other developers to understand what your change actually does. If you’re not careful, AI will just spit out what looks like a commit log, and it’s obvious. When that happens, reviewers can’t tell what the true intent is. Start with some background on what the thing does and why, then get into the details. It’s fine to use AI for this, just make sure it tells the real story.

7. Ship Something Real

In the early days of AI, everyone was posting demos. “I built this awesome weather app in 20 minutes.” “I stood up an entire AWS backend in an afternoon.” And for a while, that was genuinely exciting. We were all figuring out what was possible. But people see through that now. We all know you can generate something that looks impressive in a demo but is non-production slop under the hood. The novelty has worn off.

The bar has shifted. What’s impressive now isn’t how fast you built it. It’s whether it actually ships. Whether someone’s using it. Whether you’re using it. Even if the audience is just yourself, putting something into production is a fundamentally different thing than spinning up a demo. I have dozens of unshipped projects myself, and I’ve had to catch myself more than once. Weeks spent on something that never saw the light of day. Your time isn’t free just because AI moves fast.

That’s not to say stop sharing what you’re building. Keep doing that. But lean into sharing what you shipped rather than what you scaffolded.

Rescue Your Dead Vibe-Coded App

Vibe coding lets you rapidly create fragile software. You prompt, it generates code, you see lightning-fast progress. In a few hours, you have something that kind of works. You’re posting on LinkedIn about building a “complex app in 3 hours.”

But once you reach a certain level of complexity, reality hits. Edge cases break things, adding features breaks other parts, and debugging becomes a nightmare. Your motivation crashes and most projects die here, abandoned with half-finished dreams.

I’ve found that what separates shipped projects from abandoned ones is knowing how to rescue the project. This article shares what I learned from rescuing my own dead vibe-coded projects by getting back to the software principles I skipped. Many of these are practices we’ve known for decades, and they work just as well for AI-generated code.

Remedies: How to Rescue Your Project and Ship

1. Cut Scope

It’s psychologically hard to admit, but this is often the first place you should start: your original scope was probably too large.

Vibe coding makes it incredibly easy to add features. You ask the AI for one thing, it builds it, you get excited and ask for another, then another. It feels good in the moment. But when you try to rescue all of that code into something shippable, it feels insurmountable. So much code, so many features, none of it quite working.

Boil it down to the minimum thing you want to ship that has value. Save everything else for a later release by putting it behind feature flags, disabling the code, or deleting it altogether if you think you’ll never ship it. Come to this realization quickly and focus on the important parts. Applying the latter steps in this article will be much easier with reduced scope.
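One lightweight way to park unfinished features is a hand-rolled flag type. This is a sketch; the `FeatureFlags` name and the flag names are hypothetical:

```swift
// A minimal, hand-rolled feature flag container. Unshipped features
// stay in the repo but out of the running app.
enum FeatureFlags {
    static let sharing = false      // not ready: cut from v1
    static let reminders = false    // not ready: cut from v1
    static let basicLists = true    // the one thing we ship
}

// At the call site, unfinished code is compiled but never executed:
if FeatureFlags.sharing {
    // showSharingUI()
}
```

When the v1 ships, you flip flags on one at a time as each feature is rescued.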

A small shipped app is infinitely better than a massive dead one.

2. Establish Standards and Architecture

You’ve already built the app, and the AI probably produced a mess. One feature uses classes, another uses functions, files are in random folders. We’ll get to fixing that later. But first, let’s document the ideal architecture you want for this app. Think of this as defining your target state: how you wish the code was structured, not how it currently is.

What to document:

  • Architecture: Choose a well-known pattern (Domain-Driven Design, MVC, unidirectional architectures, etc.) that the AI will be familiar with. Define your folder structure, module boundaries, data flow, and general naming conventions.
  • Coding standards: Document patterns to follow, code smells to avoid, and how to handle common scenarios (errors, validation, logging, testing). These will likely evolve as you refactor the app. You may not know what you need to document until you start reviewing AI’s output with discipline.
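To make the first bullet concrete, a target-state architecture doc can start as simply as this. The folder names below are hypothetical, for illustration only:

```markdown
# Architecture: Target State

- `Sources/Core/` – services, models, persistence. No UI imports allowed.
- `Sources/UI/` – SwiftUI views only; each view calls exactly one
  service method per user action.
- `Sources/CLI/` – thin command wrappers over Core services.
- Data flows one way: UI/CLI → Service → Persistence.
```

Even a stub like this gives the AI something to validate against, and you can grow it as you refactor.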

I find this is actually one of the fun parts of using AI. Documenting how you want apps to be written is a great intellectual exercise that challenges you to think through questions you may have never considered before.

You can use AI to generate these docs initially, but review and understand them. I put mine in docs/architecture/ and docs/standards/ and reference them while working with AI. Continually iterate on these as you build.

An important insight: these documents are generic guidance for the AI, not specific to your individual project. They define how you want the AI to write code in general (your preferred patterns, architectures, and standards). This means you can copy these docs between projects and reuse them. I maintain a set of these for different project types like Mac apps, Python apps, etc. that I drop into new projects as starting templates.

When you have architecture and standards, you maintain understanding of your codebase. You know where things go, how they work, and why they’re structured the way they are. This is what keeps you motivated when things get hard.

3. Refactor with Your Standards and Architecture

Now that you have documented standards and architecture, it’s time to actually use them. This step is about refactoring your existing codebase to match the patterns and structure you’ve defined. This will get the app into a shape that you understand and that is hopefully more robust and easier to maintain going forward. This is a prime candidate for AI assistance.

Start with architecture, as that’s the most important piece. Reorganize your folder structure, establish clear module boundaries, and ensure data flows the way you’ve documented. Once the architectural foundation is solid, you can work on updating the code to follow your standards over time.

This is a piece you’ll be particularly proud of when you’re done, transforming chaos into something intentional and well-structured.

4. Build Feedback Loops

One of the reasons your project may not be shippable is that the AI was building without a way to validate how it was doing. The AI needs to know when its changes break things, and you need to know you’re not introducing regressions. Feedback during software development is not a new concept but matters even more with AI-generated code.

How to provide feedback:

  • Validation Against Standards – After creating a small feature or fixing a bug, have the AI validate its work against the standards and architecture docs you created earlier. This ensures it followed your patterns and conventions. AI tools often have hooks that can automate this validation step.
  • Testing – Use unit tests for individual functions, integration tests for how components work together, and end-to-end tests for complete user workflows. Tests catch regressions and verify the system actually works as intended.
  • Logging and error handling – AI can read log output as it’s running things to see what’s actually happening. Log key operations, state changes, and decision points. AI often generates happy-path code, so ensure every operation that can fail has explicit error handling with clear messages.
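As a sketch of the testing bullet, even one unit test gives the AI a pass/fail signal it can run on its own. The `TodoService` type and its API here are hypothetical:

```swift
import XCTest

final class TodoServiceTests: XCTestCase {
    // A single behavioral check the AI can run after every change.
    func testAddTodoAppendsItem() throws {
        let service = TodoService()
        try service.addTodo(listName: "Grocery", item: "Buy milk")
        let items = try service.listTodos(listName: "Grocery")
        XCTAssertEqual(items, ["Buy milk"])
    }
}
```

Tell the AI to run the suite before declaring a task done, and regressions surface immediately instead of in your review.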

5. Review All AI-Generated Code

You’re probably not going to review all the existing code. That ship has sailed. But draw a line in the sand: all new code created during the rescue will be reviewed, every commit and every change. This is part of your strategy for making things better.

Why? Because in nearly every AI-generated change, you’ll find something you want different. Without review, you’ll miss these issues and accumulate more technical debt. Just as importantly, reviewing keeps you connected to your architecture and what’s actually in your codebase.

Conclusion: Ship It

You don’t need to abandon these projects. By cutting scope, establishing standards, refactoring to match them, building feedback loops, and reviewing code, you can rescue that dead vibe-coded app and actually ship it. And on your next project, you can apply these more disciplined approaches from the start and build something maintainable.

Balancing Deep Work with Everything Else

What Is Deep Work?

Deep work is when you spend long, uninterrupted stretches of time focusing on hard problems that require your full attention. Writing code, planning a complex architecture, strategic planning. Not emails, not Slack, not the quick tasks you cross off your list for a momentary sense of accomplishment. Definitely not the paper airplane you built out of tissue paper.

Cal Newport introduced this term, and his book on the subject is worth reading. I’ve read it at least 3 times over the years!

These are the sessions where you grab a cup of coffee and get lost in your work, forget about time. Some call this flow, and for knowledge workers it’s often what makes or breaks a career.

The problem is that your job, coworkers, and life aren’t arranged around deep work. Things beg for your attention from every direction, everything feels like a minor emergency, and in the short term all of it seems more important than that deep work you keep putting off.

Here are the most important things I’ve been experimenting with lately that are working well so far.

1: Arrange Deep Work in Sprints

When I’m working on a hard problem, I want to keep going until I’ve captured value. Making that breakthrough often takes more than a few hours, sometimes days to do it well. I find it best to plan for that. I call these sprints, loosely borrowed from the agile model.

I typically do an approximate estimate (“this will take me 2 days”) and sometimes I finish early, sometimes it takes a bit longer. Nobody is going to be able to work 100% on their sprint commitment. Maybe you can spend 2 hours, 4 hours, or 6 hours if you’re lucky. The idea is to commit to some non-negotiable amount you will do each day during that sprint.

At the end of a sprint, I want to feel like I did something important. I moved the work forward and took it to a new level. Now it’s time to stop and look around for what’s next.

2: Include a Random Day Between Sprints

Things will come up every day that you’re not expecting: follow-up emails, research projects, miscellaneous requests. If you can hit your deep work targets and still handle those things that day, great, do it. But that won’t always be the case. When it’s not, I defer them until the sprint is over. Then I take a “random day” to knock them all out.

A random day is when I go through the list of smaller things I deferred. They’re their own mini projects. Sometimes I can do 5 in a day, sometimes 20 depending on the size.

By the time I finish a sprint and have that tremendous sense of accomplishment, those “death by a thousand cuts” tasks suddenly don’t seem so bad. They become a welcome break from the intensity. I take the random day to clear the backlog, then roll into my next deep work sprint.

3: Stop Torturing Yourself with Recurring Tasks

Recurring tasks accumulate and will kill your deep work. Twenty minutes reviewing your calendar, an hour on pull requests, thirty minutes on metrics. Suddenly half your day is gone before you’ve touched your real work. If you want to spend 5 hours coding but have 2 hours of daily obligations, there’s no room left for unexpected emergencies. Something has to give: either you burn out trying to cram in your deep work, or you keep cutting it back until it barely exists.

You don’t need to repeat everything every day. I review pull requests only on Mondays, Wednesdays, and Fridays instead of daily. App metrics I check on Tuesdays and Thursdays.

I’ve surprised myself with how much more efficient I am when I create scarcity of time. And sometimes, problems resolve themselves when I’m not immediately jumping on them.


If you find yourself constantly shifting between small tasks while your most important work never gets done, it might be time to get serious about protecting your deep work. Sustained focus on hard problems over time is what separates good work from great work.

Slack Etiquette: Confessions and Best Practices

We’ve all been there—someone on your team does something on Slack that just grinds your gears. Here are some common habits that might be annoying your coworkers (and you might not even know you’re doing them). Full disclosure: I’ve made many of these mistakes myself.

The Basics

Post, Then Ghost – Don’t ask a question and immediately go offline. You started the conversation—stick around a few minutes to continue the discussion.

Don’t Change the Subject – Someone posts something important, and you immediately pivot to something only semi-related? Not cool.

Complete Your Profile – An incomplete profile slows everything down. If people don’t know who you are or what team you’re on, they don’t know how to respond.

Timing Matters

Respect Do Not Disturb – If someone has set DND status, don’t push through it unless it’s truly urgent.

Time Your Messages – Slack has a scheduling feature—use it. Don’t send direct messages during someone’s offline hours.

Thread Management

Respond with Threads – If someone asks a question in a channel, reply in the thread. Your response likely doesn’t deserve its own top-level post.

Keep Long Posts in Threads – Post a short message in the channel, then reply to yourself with the details.

Remove Wrong People – Figured out who you actually need to talk to? Start a new thread. Don’t keep dragging people into conversations they don’t belong in.

Delete Courtesy Replies – If someone deletes a post, do them a favor and delete your reply too.

Message Quality

Don’t Just Say “Hi” – Avoid sending a “hey” or “hi” message and waiting for a response. Slack is asynchronous. Say what you need to say upfront instead of expecting a back-and-forth dialogue.

Slow Down on Edits – Making three corrections in the first minute after posting? Take a breath before hitting enter. (I’m most guilty of this).

Use the Snooze Feature – Drowning in messages? Reply to what you can, snooze what needs follow-up.

Be a Human Being

Don’t Bury Posts – Someone just shared a celebration or recognition? Don’t immediately post something else and push their content up where no one will see it.

Recognize Good Work – When something good happens or someone does something well, acknowledge it. Throw an emoji on there. People notice their supporters and return the favor.

No Negative Emojis – Don’t thumbs-down someone’s ideas unless they’re asking for a poll. That’s an offline conversation.

Arguments Belong Elsewhere – If you’re getting into it with someone on Slack, stop immediately and talk directly. Slack arguments never work out in anyone’s favor.

Align Your Slack Tone with Your In-Person Tone – Some people are so nice in person but come across as completely different people on Slack. Text doesn’t convey tone—be mindful of how you’re coming across.


Some of these are easier than others (I’m definitely guilty of a few myself). But being aware of these habits can make Slack a much better experience for everyone on your team.

CLI-Driven Development: Building AI-Friendly iOS and Mac Apps

I’ve been using Claude Code for several months now on many personal projects, and I’ve worked out some practices that work really well for me as an iOS and Mac developer. This CLI-driven development approach has fundamentally changed how I build applications with AI assistance.

The Problem with GUI-Based Development

One of the most powerful aspects of AI coding assistants is their ability to receive feedback and iterate on solutions. Give an AI a goal, and it can refine and improve until it gets there. However, if you’re an iOS or Mac developer, you’ve likely hit a wall: GUI interfaces are opaque to AI systems.

You could set up tooling to capture simulator screenshots, but this approach is slow and error-prone. By the time the AI gets a screenshot, analyzes it, and suggests changes, you’ve lost the rapid iteration cycle that makes AI assistants so valuable in the first place.

The other option is writing comprehensive unit tests. If you’re not doing test-driven development with AI yet, CLI-driven development is a nice stepping stone toward that goal. It has the added flexibility of interacting with real data—somewhat like an end-to-end test. Tests are still important, but this is another tool in your toolbelt for those not ready to go full TDD.

The goal is to give the AI the full context of your running application so it can fully interact with it.

Note: See the caveats section below regarding respecting user privacy and security when giving AI access to application data.


The Solution: CLI-Driven Development

CLI-driven development means architecting your application so that every use case accessible via the UI is also accessible via a command-line interface. The UI and CLI become access points to these use cases, rather than containing the business logic itself.

This isn’t a new idea. We’ve been told for years not to put business logic in view controllers or SwiftUI views. When working with AI, this separation becomes critical.

Benefits

  1. Better Architecture: Enforces separation between UI and business logic
  2. Faster Debugging: AI can identify and fix issues more quickly
  3. Faster Feature Development: AI has a way to give itself feedback
  4. Improved Testability: These principles make your app more unit-testable too
  5. Easier Data Migrations: AI can access and transform your real data

Architecture: Three Targets

I’m vastly oversimplifying the architectural requirements for a real application here. Folks will have their choice of patterns, frameworks, and approaches. I’m focusing on the minimal Swift package setup that will allow for this flow to work.

Think of your application as having three distinct targets within a Swift package:

  1. UI Target: Contains your SwiftUI views, view controllers, and UI interactions
  2. CLI Target: Handles command-line input/output
  3. Core Target: Contains all business logic, services, data interactions, and workflows

Both the UI and CLI targets are thin layers that simply pass data to the Core. When a user taps a button, the UI sends data to a service. When you run a CLI command, it does the same thing.

Shared Services in Core

This approach requires discipline you probably want anyway: every new feature or interaction needs a single method that can be called with all necessary parameters, with all the important logic happening outside your views. These shared services live in your Core target.

Here’s a brief example of a service to add a TODO:

Core Target – TodoService.swift:

public class TodoService {
    // persistenceManager and getOrCreateList(name:) are private
    // helpers of this class, elided here for brevity
    public func addTodo(listName: String, item: String) throws {
        // All business logic lives here
        let list = try getOrCreateList(name: listName)
        let todo = Todo(title: item, completed: false)
        list.todos.append(todo)
        try persistenceManager.save(list)
    }
}

CLI Target – main.swift:

// Parse command: ./MyAppCLI add-todo --list "Grocery" --item "Buy milk"
let args = CommandLine.arguments
let (listName, item) = (args[3], args[5]) // naive parsing, fine for a dev tool
let service = TodoService()
try service.addTodo(listName: listName, item: item)
print("Added: \(item)")

Mac App Target – AddTodoView.swift:

struct AddTodoView: View {
    @State private var itemName = ""
    let listName: String
    let service = TodoService()

    var body: some View {
        TextField("Item", text: $itemName)
        Button("Add") {
            try? service.addTodo(listName: listName, item: itemName) // error handling elided
        }
    }
}

Both the CLI and Mac app call the exact same TodoService.addTodo() method. The business logic is shared, and both interfaces benefit from any improvements or fixes.


What Can You Do With This?

Debugging with AI

Let’s say you see an issue when adding a TODO to your Grocery list in the Mac app. Here’s how the conversation might go:

> I see a bug when adding a TODO to my Grocery list. Check it out.

Let me test this using the CLI:

$ ./MyAppCLI add-todo --list "Grocery" --item "Buy milk"

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - Buy milk
  - Buy milk

I see the issue - the TODO is being added twice. Let me check the add-todo implementation...

[AI fixes the duplicate insertion bug]

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - Buy milk

Fixed! The issue was in the data persistence layer where we were saving twice

The AI can use the CLI to reproduce the issue, see what’s happening, and iterate on fixes until it’s working correctly.

Building Features Faster

When building a feature, you can give the AI requirements and have it verify its work. Here’s an example:

> Add a feature to mark TODOs as complete. When I mark a TODO as complete,
> it should show with a checkmark when I list them. Verify it with the CLI
> and make sure it works before you're done.

I'll implement the complete-todo feature. Let me build it and test it:

[AI implements the feature in Core target]
[AI adds CLI command for marking complete]
[AI updates list display to show completion status]

Now let me test it:

$ ./MyAppCLI add-todo --list "Grocery" --item "Buy milk"
Added: Buy milk

$ ./MyAppCLI add-todo --list "Grocery" --item "Buy eggs"
Added: Buy eggs

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - [ ] Buy milk
  - [ ] Buy eggs

$ ./MyAppCLI complete-todo --list "Grocery" --item "Buy milk"
Marked complete: Buy milk

$ ./MyAppCLI list-todos --list "Grocery"
Grocery List:
  - [x] Buy milk
  - [ ] Buy eggs

Perfect! The feature is working. TODOs now track completion status and
display with checkmarks when completed.

The AI builds the feature and verifies it works before reporting back. In theory, you’ll see the same results in the UI application.

Data Migration and Transformation

During early iterations, you frequently need to migrate data when properties change. With a CLI, you can ask the AI to transform all your proof-of-concept data because it has access to all the APIs necessary to read and write data.

> Migrate my TODO data from the old JSON format to the new format.
> Make a backup first, then verify the results match exactly.

$ ./MyAppCLI export-data --output todos_backup.json
Exported 127 TODOs to todos_backup.json

[AI implements migration from old format to new nested format with IDs]

$ ./MyAppCLI migrate-data --from todos_backup.json --to-new-format
Migrating 127 TODOs...
Migration complete.

$ ./MyAppCLI verify-migration --old todos_backup.json --new current
 ✓ All 127 TODOs migrated successfully
 ✓ Data integrity verified

Migration complete. Backup saved at todos_backup.json.

Important Caveats

This is Not a Replacement for Unit Tests

Unit testing leads you toward similar practices of keeping services separate from UI. Following CLI-driven development actually makes your app more testable. I suggest using this approach for both unit testing and CLI-driven development. Don’t focus on CLI-driven development alone.

Security and Privacy Considerations

You must respect your customers’ privacy and security. Be mindful when giving AI access to data and understand your privacy requirements. The CLI should be treated as a development tool with the same security considerations as direct database access.
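One simple safeguard, shown here as a sketch: compile the CLI entry point only into debug builds so it can never ship to users. The `runCommands` helper is hypothetical:

```swift
// main.swift of the CLI target. runCommands(_:) is a hypothetical
// dispatcher over your Core services.
import Foundation

#if DEBUG
try runCommands(CommandLine.arguments)
#else
print("This CLI is a development-only tool and is disabled in release builds.")
exit(1)
#endif
```

Combined with keeping the CLI target out of your release scheme entirely, this limits data access to your own development machine.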

The Dangers of Mixing Locks with Core Data

Codebases accumulate patterns over time that don’t match up with current best practices. These patterns might have made sense when they were written, or they might just reflect how our understanding has evolved. Before you can address these patterns, you need to spot them and understand why they’re risky.

One particularly dangerous combination in iOS codebases is mixing locks (@synchronized, NSLock, etc.) with Core Data’s performAndWait. Both were used to keep operations synchronous, but together they create hidden cross-thread dependencies that lead to deadlocks, freezing your app.

This shows exactly how these deadlocks occur, so you can recognize and avoid them in your own code.

A Simple Shared Class

Let’s start with a basic class that manages some shared state. This shows a common pattern from before Swift concurrency – using dispatch queues to manage background work. This class might be accessed from multiple threads:

  • Main thread: reads the operation status description
  • Background thread: starts a background operation

class DataProcessor {
    var currentOperationIdentifier: String = ""
    var currentOperationStatus: String = ""

    // Called from main thread
    func getDescription() -> String {
        return "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }

    // Called from background thread
    func startBackgroundOperation() {
        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
        // Do processing
    }
}

The Problem – Race Conditions

When dealing with multiple threads, execution can interleave unpredictably. One thread executes some code, then another thread slips in and executes its code, then back to the first thread – you have no way of knowing the order.

Here’s what can happen:

  1. Background thread: currentOperationIdentifier = “DataSync”
  2. Background thread: about to update the status…
  3. Main thread: calls getDescription()
  4. Main thread: reads identifier → “DataSync” ✓
  5. Main thread: reads status → “Idle” ❌ (old value!)
  6. Background thread: currentOperationStatus = “Processing” – ❌ too late, the main thread already read the old value

The main thread ends up with the new identifier but the old status – a mismatch that leads to inconsistent data.

There are better solutions to this problem – like bundling related state in one immutable structure, or using actors in modern Swift. But in legacy codebases, synchronous locks were a common strategy to protect shared state.
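For reference, here is a sketch of the actor-based alternative mentioned above. An actor serializes all access to its state, so the compiler enforces what the lock only promises:

```swift
// An actor protects its mutable state automatically: every access is
// serialized, so the identifier and status can never be observed mid-update.
actor DataProcessor {
    private var currentOperationIdentifier = ""
    private var currentOperationStatus = ""

    func getDescription() -> String {
        "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }

    func startBackgroundOperation() {
        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
    }
}
```

Callers must await these methods, which is exactly what makes the deadlock pattern below impossible to write. The rest of this article focuses on the legacy lock-based approach.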

Adding Locks for Thread Safety

The lock creates “critical sections” – ensuring we either write to both properties OR read from both without other threads interfering.

class DataProcessor {
    private let lock = NSLock()
    var currentOperationIdentifier: String = ""
    var currentOperationStatus: String = ""

    func getDescription() -> String {
        lock.lock()
        defer { lock.unlock() }
        
        return "Operation \(currentOperationIdentifier) has status: \(currentOperationStatus)"
    }

    func startBackgroundOperation() {
        lock.lock()
        defer { lock.unlock() }

        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
    }
}

So far, this works fine. The locks protect our shared state, and both threads can safely access the properties.

The Deadlock – When Locks Meet Core Data

Now let’s assume we want to store this data to Core Data. This is where things get interesting.

When sharing Core Data across threads, you can run into race conditions just like we had earlier. So you need to use the right APIs to protect the critical sections too.

Your go-to is perform – it asynchronously performs the work safely. However, there are cases in legacy code where the caller needs to do something synchronously using performAndWait. When you call performAndWait on a main queue context, it blocks the calling thread until the block executes on the main thread. Think of waiting on the main queue as our “lock”.

Let’s assume some developer in the past (who definitely isn’t you) decided to use performAndWait here:

func startBackgroundOperation(with context: NSManagedObjectContext) {
    lock.lock()
    defer { lock.unlock() }
    
    // Assume the main thread tries to call
    // getDescription() at this point. 
    // It is blocked as we are holding the lock

    currentOperationIdentifier = "DataSync"
    currentOperationStatus = "Processing"

    // 💀 DEADLOCK HAPPENS HERE
    context.performAndWait {
        saveDataToStore(context: context)
    }
}

Why Does This Deadlock?

There’s a problem:

  • performAndWait needs the MAIN THREAD to execute this block
  • The MAIN THREAD is blocked waiting for our lock (in getDescription)
  • We’re holding that lock and won’t release until performAndWait completes

CIRCULAR WAIT = DEADLOCK

Timeline of the Deadlock

Background Thread                                | Main Thread
lock.lock() ✅                                   | –
Updates properties                               | Calls getDescription()
Still holding lock…                              | lock.lock() ❌ WAITING…
Waiting on performAndWait() (needs main thread)  | Can’t process – stuck waiting on lock!
  • Main thread: stuck in lock.lock() waiting for background thread
  • Background thread: stuck in performAndWait waiting for main thread

How to Fix This Deadlock

The best solution is to eliminate performAndWait entirely and use the asynchronous perform instead. This breaks the circular dependency because the background thread no longer waits for the main thread:

func startBackgroundOperation(with context: NSManagedObjectContext) {
    lock.lock()
    defer { lock.unlock() }

    currentOperationIdentifier = "DataSync"
    currentOperationStatus = "Processing"

    // ✅ No deadlock
    // doesn't block waiting for main thread
    context.perform {
        self.saveDataToStore(context: context)
    }
}

If you absolutely cannot eliminate performAndWait, you’ll need to carefully analyze all lock dependencies, but this is error-prone and hard to maintain. The real fix is embracing asynchronous patterns.
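One such pattern worth considering when performAndWait truly can’t be removed (a sketch of the idea, not a guaranteed fix for every lock graph): never hold your own lock across the synchronous wait. Copy what the save needs while the lock is held, release it, and only then do the blocking work. The `save` closure below is a stand-in for the real `context.performAndWait { ... }` call.

```swift
import Foundation

final class SafeProcessor {
    private let lock = NSLock()
    private var currentOperationIdentifier = ""
    private var currentOperationStatus = ""

    func startBackgroundOperation(save: (String, String) -> Void) {
        // 1. Mutate shared state under the lock…
        lock.lock()
        currentOperationIdentifier = "DataSync"
        currentOperationStatus = "Processing"
        // …and copy out exactly what the save needs.
        let id = currentOperationIdentifier
        let status = currentOperationStatus
        lock.unlock()  // 2. Release BEFORE any cross-thread wait.

        // 3. Now a synchronous wait (performAndWait in the real code)
        //    can't deadlock against getDescription(), because we no
        //    longer hold the lock the main thread needs.
        save(id, status)
    }
}
```

This shrinks the critical section so the lock and the main-queue wait can never be held at the same time – one of the two edges in the circular wait is gone.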

What We Learned

In this article, we’ve seen how mixing locks with Core Data’s performAndWait creates a classic deadlock scenario:

  1. Race conditions can occur when multiple threads access shared mutable state
  2. Locks were traditionally used to protect this shared state with critical sections
  3. performAndWait works like a lock, but one that requires the main thread to execute its block
  4. When a background thread holds a lock and calls performAndWait, while the main thread is waiting for that same lock, we get a circular dependency – neither thread can proceed

Coming Up



Future articles will explore other ways you can hit or avoid these deadlocks:

  • Child contexts with read operations – Why using a child context doesn’t save you from deadlocks during fetch operations
  • Child contexts with write operations – How save operations on child contexts create the same circular dependencies
  • Private Contexts – Why private contexts with direct store connections are less likely to lock up

How I Use Voice and AI to Turn Messy Thoughts Into Clear Plans

When I was a teenager, I got really into philosophy. I’d sit at my desk with blank paper (this was before smartphones), scribbling down every half-baked thought about existence and consciousness. Whatever rabbit hole I’d fallen into that week.

I realized that brainstorming on paper forced me to actually think. All those “profound” ideas bouncing around my head? Half of them were nonsense after I’d written them down. The other half started making more sense than I expected.

But I kept trying to organize my thoughts while brainstorming, which defeated the whole purpose. I needed that messy exploration phase, but the structure kept getting in the way.

So I started talking through ideas out loud. I could work through ideas while biking or driving, no structure needed. Just raw thoughts. No stopping to fix sentences, no fiddling with formatting.

Problem was, what do I do with 30 minutes of rambling? Record, listen back and take notes? Those recordings just sat there, full of a few good ideas I never actually used.

Then transcription and AI came along.

Now I can have the same stream-of-consciousness voice sessions, dump the transcript into Claude or ChatGPT, and get a structured plan back. Talk freely, get organized output.

How I Actually Do It

Here’s what I do when I need to work through something:

  1. Hit record and brain dump: Apple’s voice recorder, usually a few minutes but sometimes as long as an hour. Start with the problem, then just go. Questions, angles, contradictions, all of it.
  2. Let it wander: I start talking about some ideas and often end up somewhere unexpected. Ideas build on each other. What starts as chaos usually ends with clarity.
  3. Feed the transcript to AI: Apple transcribes it, I give it to Claude or ChatGPT. The AI follows my rambling and pulls out what matters.
  4. Quick cleanup: Sometimes I’ll record myself reviewing the output with changes. Or just make a few quick edits. Usually minimal.

Team Brainstorming Gets Crazy Good

This gets even better with teams. Record a team brainstorming session (with permission, obviously). Not for meeting notes, but for AI to turn the raw thoughts into a comprehensive plan.

Weird thing happens when everyone knows AI will form the first draft of the plan: people actually explain their thinking. We spell out assumptions. We say why we’re making decisions. Someone will literally say “Hey AI, make sure you catch this part…” and we all laugh, but then we realize we should be this clear all the time.

No one’s frantically taking notes. No one’s trying to remember who said what. We just talk, explore tangents, disagree, figure things out. The AI sorts it out later.

Where It Gets Wild: Voice-to-Code

Real example: On an open source project recently, we were discussing background processing in iOS. Background tasks? Silent push? Background fetch? Everyone’s got ideas, no one actually knows. Usually this ends with “let’s spike on it,” and a week later we’ve explored one or two of the concepts, we’re already committed to the first or second idea, and we’re still not really sure.

This time we recorded the whole messy discussion. All our dumb questions: How often does BGAppRefreshTask actually fire? What’s the real time limit? Does anything work when the app’s killed?

Fed the transcript to Claude, asking for a demo app covering everything we discussed plus anything we missed. The idea was to create a demo that confirms assumptions. We really don’t care what the AI’s opinion is of how things might work – give us something real we can confirm for ourselves.

An hour later we had a working sample app, each tab demonstrating a different approach with detailed event logging in the UI. We installed it and watched what actually happened.

After a few hours experimenting with the app and reading the code, we understood how these APIs actually work, their limitations, and which approach made sense.

Why This Works so Well

I get clarity this way that doesn’t happen otherwise. Talking forces me to think linearly but lets ideas evolve. AI adds structure without killing the exploration.

Might work if you:

  • Get ideas while walking or driving
  • Find talking easier than writing
  • Find that editing while writing kills your flow
  • Need to explore without committing

A Simple Checklist for Debugging Regressions

I’ve been thinking about a process I’ve used for resolving regressions that may be useful to share. I’ve never explicitly written the steps down before, but figured it was worth capturing—both for myself and others.

When a regression shows up, there are three questions that I’ve found you have to answer. Skipping any of them usually leads to confusion, wasted time, or a fix that doesn’t actually solve the real problem. But if you take the time to work through them, you can usually land on the right answer with confidence.

1. Can you reproduce the problem?

A lot of engineers want to jump straight into the code. That’s the fun part, right? Digging through logic, inspecting diffs, reasoning your way to a fix. But if you can’t reliably reproduce the issue, studying the code is usually a waste of time. You don’t even know where to look yet.

Reproducing the problem is the first real step. It’s not glamorous, and it can feel a little silly—especially when you’re trying the same steps over and over with tiny variations. But this is one of the most valuable things you can do when a bug shows up.

As engineers, we have a special vantage point. We know how the code works, and we often have a gut instinct about what kinds of conditions might trigger strange behavior. That gives us a real edge in uncovering subtle issues—so don’t think you’re above tapping on an iPad for hours or running the same test over and over. It’s our duty to chase it down.

Once you have a reliable repro, everything gets easier. You can try fixes, stress other paths, and most importantly, build real confidence that your solution works.

Some useful tricks:

  • Adjust timing, inputs, or state to help provoke the bug
  • Script setup steps or test data to save time
  • Loop the behavior or stress threads to make edge cases more likely
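As one concrete illustration of the thread-stress idea (a generic sketch, not tied to any particular bug): hammer the suspect code path from many threads at once, so a race that shows up once a week in the wild shows up in seconds at your desk. Here the lock is in place, so the total comes out exact; in the buggy version you’re hunting, imagine the lock calls are missing.

```swift
import Foundation
import Dispatch

// A tiny counter protected by a lock – the code path under suspicion.
final class Counter {
    private let lock = NSLock()
    private(set) var value = 0

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        value += 1
    }
}

let counter = Counter()

// 10,000 concurrent increments. Without the lock, lost updates would
// usually leave the total below 10,000 – a quick, repeatable repro.
DispatchQueue.concurrentPerform(iterations: 10_000) { _ in
    counter.increment()
}
print(counter.value)  // 10000 when the critical section is intact
```

The same shape works for any suspected race: wrap the flaky operation in `concurrentPerform`, crank the iteration count, and check an invariant at the end.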

2. What changed?

This step is often skipped. People jump into debugging without first understanding what changed. But the fastest way to track down a regression is to compare working code to broken code and see what’s different.

This question can feel sensitive. It gets close to specific contributions that may have introduced instability. I’ve seen plenty of cases where the discussion gets deflected into vague root causes or long-term issues—anything but the specific change. That’s understandable. We’ve all been there. But avoiding the question doesn’t help. It puts the fix at risk and slows everyone down.

Some go-to techniques:

  • Review pull requests and diffs
  • Trace from crash logs or error messages to recently changed code
  • Use git bisect to find the breaking commit
  • Try reverting the suspected change and see if the issue disappears

Once you find the change, test your theory. If undoing it makes the problem go away, you’re on the right track.

3. Why does it happen?

Knowing what changed isn’t enough. You need to understand why that change caused the issue. Otherwise, you’re just fixing symptoms, and might miss deeper problems.

This is where the real problem solving happens:

  • Read documentation for the APIs or system behavior involved
  • Think through the interaction between components or timing
  • Build a mental model and prove it with experiments or targeted tests

You don’t want to ship a fix that works by accident. You want one that works because you actually understand the problem. That’s what prevents repeat issues and edge cases slipping through.

Wrapping up

These three questions — Can you reproduce it? What changed? Why does it happen? — have helped me find and fix bugs more reliably than anything else.

It’s easy to skip them under pressure. It’s tempting to merge the first thing that seems to work. But without answering all three, you’re flying blind. You might get lucky. Or you might end up wasting hours chasing your tail or shipping the wrong fix.

New Month’s Resolution

I’m skeptical of New Year’s resolutions, at least in the traditional way they are framed. The statistics are bleak; only 8% of people stick with their resolutions. I think a year is just too long.

Let’s consider the resolution to go to the gym 5 days a week. Things will be going well the first few days or weeks. But suppose your new gym rat friends let you know your plan is flawed. They suggest you should only train 4 days per week. What would this change mean for the resolution? Are you compromising if you cut back a day? Or let’s assume you have a minor injury, requiring a few weeks of rest. Is it game over now that you took some time off?

Whenever you start something new, you need to make many adjustments. A rigid plan made during the holidays probably isn’t going to hold up for the year. Your brain was likely in a planning fog anyways from too many cookies and bad holiday films.

As an alternative, let’s consider monthly resolutions. Basically these work just like New Year’s resolutions. Add a calendar reminder once a month to select some important goals. Work hard to stick with this plan for the next 4 weeks. When the new month arrives, it’s time to celebrate your success and think about what can be improved. You can roll the same strategy over into the new month, make adjustments based on what you learned, or scrap it entirely and do something else.

I experimented with the monthly resolutions this month. As I write this I’m excited to conduct a post-mortem on the last month and incorporate the learnings into my January goals.

Take Control of Your Window AC Unit

While there are plenty of smart devices to control central heat and air conditioning, these will not usually work with portable units. Using a Raspberry Pi with a few simple electronics, I was able to upgrade the control of my window A/C unit.

At a minimum, I wanted to turn the A/C on and off from my phone. This is nice at the end of the day when I go to bed and realize I left the A/C running from the other end of the house. A remote control was included with the A/C unit, but its range is obviously limited. By coupling an infrared receiver to my Raspberry Pi, I was able to teach the Pi the remote’s infrared signals using an open source Linux package, LIRC. Then I connected an infrared transmitter to the Raspberry Pi to relay those same signals (Power, Temperature +/-, Mode, etc.), allowing the unit to be controlled from anywhere.

While sending the remote control’s commands is useful, it would also be helpful to know whether the unit is on or off. For this I attached a vibration sensor that notifies the Pi whenever vibration has been detected in the last 5 seconds, which is interpreted as the A/C running.

Final Hardware Setup

This setup allows the Pi to communicate with the A/C unit, but I needed to communicate with the Pi. I opted for an Apple Shortcut that sends SSH commands to trigger the individual scripts. This is not the most elegant interface but it does the job.