Side-by-Side Variable Comparison for Snapshot Debugging
When you’re debugging a tricky issue in a distributed system, “what changed?” is often the most important question.
You add logs, you capture data, you redeploy, and suddenly your browser is full of open tabs, copied JSON blobs, and screenshots of log lines. Comparing behavior between two requests, two users, or two releases turns into a manual, error-prone chore.
Lightrun Snapshots were built to fix the data collection side of that story. Now we’re tackling the comparison side too.
In this post, we’ll walk through a new way to compare variable values across snapshot hits directly inside Lightrun so you can reason about your application’s behavior without juggling terminals, editors, and scratchpads.
Why comparing snapshot hits matters
Lightrun Snapshots let you capture the full local state at a line of code in a running application, without redeploying and without polluting your logs. That’s powerful on its own, but most real debugging scenarios involve at least two data points:
- A “good” request vs. a “bad” one
- Behavior before and after a recent change
- A snapshot from staging vs. one from production
- A single user’s failing flow compared to a baseline user
In each of these cases, you’re trying to answer a simple question:
“What’s different in the local state between these runs?”
Until now, that gap was often filled with copy-paste: copying JSON payloads into a note, flipping between windows, or dumping values into a spreadsheet.
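That manual workflow looks roughly like the following Python sketch: paste two blobs somewhere, line the keys up, and eyeball what changed. (The payloads and field names here are invented for illustration.)

```python
import json

# Two hypothetical payloads copied out of a "good" and a "bad" snapshot hit.
good = json.loads('{"tenantId": "t-42", "featureFlagEnabled": true,  "retries": 0}')
bad = json.loads('{"tenantId": "t-42", "featureFlagEnabled": false, "retries": 3}')

# The chore itself: collect every key whose value differs between the two.
diff = {k: (good.get(k), bad.get(k))
        for k in good.keys() | bad.keys()
        if good.get(k) != bad.get(k)}

print(diff)  # the fields that actually changed between the two hits
```

This works for two values, but it stops scaling the moment you have five hits, three environments, and a dozen fields to track.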
The new comparison capabilities are designed to bring that whole process into Lightrun.
Two ways to compare snapshot data
The experience centers on two complementary workflows:
- Compare snapshot hit variable values in the clipboard
- Compare variables across snapshot hits
You can think of them as:
- Clipboard comparison → quickly eyeball differences between a handful of values
- Cross-hit comparison → systematically compare variables across multiple snapshot hits
1. Compare snapshot hit variable values in the clipboard
Sometimes you don’t need a full analysis. You just want to grab a few values from one or more snapshot hits and answer: “Are these the same or not?”
The clipboard comparison flow is built for that.
How it works
Capture your snapshot hits
Use Lightrun Snapshots as you already do: set conditions, trigger events, and let hits accumulate for the code location you care about.
Select the variables you care about
From a given snapshot hit, pick one or more variables (or nested fields) that are relevant to your investigation: IDs, flags, configuration values, response payload fields, and so on.
Send them to the comparison clipboard
Instead of copying these values into an external note, add them to Lightrun’s internal comparison clipboard. You can repeat this across different hits.

View differences side by side
In the clipboard comparison view, those values are lined up so you can visually spot what changed between hits.

Why it’s useful
- Less friction, less context switching
You no longer have to copy JSON into a text editor just to compare two fields. The comparison happens where you collected the data.
- Focus on what matters
You decide which variables matter for this bug. The clipboard is a curated set of values, not the entire snapshot payload.
- Great for quick hypotheses
If you suspect a single field (like featureFlagEnabled or tenantId) is driving the bug, clipboard comparison confirms or rules that out in a few seconds.
This feature is available across all IDEs in Lightrun 1.70+.
2. Compare variables across snapshot hits
For deeper investigations, you often need to look beyond a few fields. You want to treat multiple snapshot hits as rows in a table and compare many variables as columns.
That’s where cross-hit variable comparison comes in.
How it works
- Choose the snapshot hits to compare
From your list of hits, select the ones that represent the cases you care about, for example several failing hits, a couple of successful ones, or hits from different environments.
- Select variables across those hits
Pick which variables should be part of the comparison: function arguments, derived values, config objects, user context, and more.

- See a structured comparison view
Lightrun presents the selected variables across the selected hits in a structured format so you can quickly scan:
- Which values are identical across all hits
- Which values differ only on failing hits
- Which values change consistently between environments or releases
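Conceptually, the view surfaces the same signals as this Python sketch, which treats hits as rows and variables as columns. (The hits, statuses, and variable names below are hypothetical, not Lightrun data structures.)

```python
# Hypothetical snapshot hits: each one is a dict of captured variables,
# tagged with whether the corresponding request failed.
hits = [
    {"status": "ok",   "vars": {"region": "eu", "cacheHit": True,  "retries": 0}},
    {"status": "ok",   "vars": {"region": "eu", "cacheHit": True,  "retries": 0}},
    {"status": "fail", "vars": {"region": "eu", "cacheHit": False, "retries": 3}},
]

variables = hits[0]["vars"].keys()

# Values identical across every hit -- safe to rule out.
identical = {v for v in variables
             if len({h["vars"][v] for h in hits}) == 1}

def group_values(status, v):
    """All distinct values of variable v among hits with the given status."""
    return {h["vars"][v] for h in hits if h["status"] == status}

# Values uniform within each group but different between passing and
# failing hits -- the prime suspects.
suspects = {v for v in variables
            if len(group_values("ok", v)) == 1
            and len(group_values("fail", v)) == 1
            and group_values("ok", v) != group_values("fail", v)}

print(identical)  # variables that never change
print(suspects)   # variables that diverge only on failing hits
```

With three hits and three variables this is overkill, but it shows why a tabular view pays off: the "differs only on failing hits" column practically points at the bug.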

Why it’s powerful
- Turn snapshot hits into a mini dataset
Instead of skimming one hit at a time, you get a small table of runs vs. variables that surfaces patterns.
- Ideal for regression and environment mismatch debugging
If a bug appears only in production, you can compare production snapshot hits with their staging equivalents and immediately see configuration or input differences.
- Helps you confirm causality
When you see that a single variable is consistently flipped or diverging only in failing hits, you can move from “this might be related” to “this is very likely the cause.”
The cross-hit variable comparison feature is now available for JetBrains IDEs with Lightrun 1.72+.
Example scenarios where this shines
Here are a few real-world flows where these comparison features save time.
1. “It only fails for some users”
- Capture snapshots on the failing endpoint.
- Collect hits for users where the request fails and where it succeeds.
- Use cross-hit comparison to look at user context, feature flags, or tenant configs.
- Quickly see which flag or field differentiates the broken users from the healthy ones.
2. “It works in staging but not in production”
- Place a snapshot at the suspicious line of code.
- Trigger it in staging and in production with similar input.
- Compare environment variables, configuration objects, or external API responses across hits.
- Let the comparison view highlight which value diverges between environments.
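For nested configuration objects, the kind of divergence the view highlights can be sketched as a recursive diff. (The config values below are invented for illustration.)

```python
# Hypothetical config objects captured from a staging hit and a production hit.
staging = {"db": {"pool": 10, "timeoutMs": 500}, "flags": {"newParser": True}}
prod = {"db": {"pool": 10, "timeoutMs": 200}, "flags": {"newParser": False}}

def diff(a, b, path=""):
    """Recursively yield (path, value_a, value_b) for every diverging leaf."""
    for key in a.keys() | b.keys():
        p = f"{path}.{key}" if path else key
        va, vb = a.get(key), b.get(key)
        if isinstance(va, dict) and isinstance(vb, dict):
            yield from diff(va, vb, p)
        elif va != vb:
            yield (p, va, vb)

for entry in sorted(diff(staging, prod)):
    print(entry)
# ('db.timeoutMs', 500, 200)
# ('flags.newParser', True, False)
```

Identical branches (like `db.pool`) disappear entirely, leaving only the values that could explain the environment-specific behavior.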
3. “This started after the last release”
- Capture snapshot hits before and after your deploy (or across two snapshot configurations).
- Compare arguments, internal state, and derived values across versions.
- See what behavior changed, even if you never logged those values before.
Designed for developers, not dashboards
These comparison capabilities are built for a developer’s day-to-day workflow:
- Zero redeploys
Like all Lightrun Snapshots, you can add and remove these comparisons dynamically in a running system.
- No code pollution
You’re not adding temporary logs and TODOs just to inspect state. You’re using the snapshot data you already collect.
- Fast iteration
As your mental model of the bug evolves, you can add or remove variables from the comparison and focus on the ones that look most suspicious.
Getting started
If you’re already using Lightrun Snapshots:
- Capture a few snapshot hits on a line that’s involved in a known tricky issue.
- Send a few variable values to the comparison clipboard and eyeball the differences side by side.
- For a more in-depth investigation, select several hits and variables and explore the cross-hit comparison view.
If you’re new to Lightrun, this feature is a good illustration of what you gain from production-safe, on-the-fly observability: not just more data, but better tools to understand that data.
Wrap-up
Debugging is ultimately about understanding how and why your application behaves differently across requests, users, environments, and releases. Snapshots answer the “show me what’s happening right now” part. With variable comparison in the clipboard and across snapshot hits, Lightrun now helps you answer “what’s different between these runs?” just as effectively.
If you’re ready to stop copy-pasting JSON into notes just to compare two values, this feature is for you.
Want to see it in action?
Log into your Lightrun customer space and start comparing snapshot hits, or reach out to our team for a quick walkthrough tailored to your stack.
It’s really not that complicated.
You can actually understand what’s going on inside your live applications.