
perf: detect target maximum FPS#6470

Closed
mimecuvalo wants to merge 17 commits intomainfrom
mime/adaptive-fps

Conversation

@mimecuvalo
Member

@mimecuvalo mimecuvalo commented Jul 17, 2025

The throttle PR (#6409) reminded me of a wee conversation we had when the performance improvements went out last year (#2977 (comment)) about how setting the FPS to be a constant 60 sort of short-changed the faster machines that could achieve 120fps.

When it hits 120 fps on my machine it starts feeling butter smooooth.

Change type

  • improvement

Test plan

  • Unit tests
  • End to end tests

Release notes

  • perf: detect target maximum FPS

API changes

  • adds internal functions for getting/setting target FPS

@vercel

vercel bot commented Jul 17, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| analytics | Ready | Preview | Oct 1, 2025 0:57am |
| chat-template | Ready | Preview | Oct 1, 2025 0:57am |
| examples | Ready | Preview | Oct 1, 2025 0:57am |
| tldraw-docs | Ready | Preview | Oct 1, 2025 0:57am |
| workflow-template | Ready | Preview | Oct 1, 2025 0:57am |

@huppy-bot huppy-bot bot added the improvement Product improvement label Jul 17, 2025
@huppy-bot
Contributor

huppy-bot bot commented Jul 17, 2025

API Changes Check Passed

Great! The PR description now includes the required "### API changes" section. This helps reviewers and SDK users understand the impact of your changes.

@steveruizok steveruizok requested a review from Copilot July 17, 2025 12:31

Copilot AI left a comment


Pull Request Overview

This PR introduces an adaptive FPS system that dynamically adjusts the target frame rate between 30-120 FPS based on real-time performance monitoring, replacing the previous fixed 60 FPS limit to better utilize high-performance devices.

  • Implements adaptive FPS algorithm that monitors frame execution time and adjusts target FPS accordingly
  • Adds API functions for getting current FPS, resetting the adaptive system, and manually setting target FPS
  • Updates debug panel to display both current and target FPS for better visibility

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.

File Description
packages/utils/src/lib/throttle.ts Core adaptive FPS implementation with performance monitoring and automatic adjustment logic
packages/utils/src/index.ts Exports new adaptive FPS API functions
packages/utils/api-report.api.md API documentation updates for new internal functions
packages/tldraw/src/lib/ui/components/DefaultDebugPanel.tsx Enhanced FPS display to show target FPS alongside current FPS


@ds300
Collaborator

ds300 commented Jul 22, 2025

at the moment TLSyncClient uses this to throttle socket messages to 60fps and I don't think we'd want to be sending messages any faster than that. Maybe as part of this refactor you could allow overriding the upper fps bound for some throttled operations?
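One way to honor an upper bound for specific throttled operations, as suggested here, is a per-call cap. The following is a minimal sketch with an injected clock for testability; `makeCappedThrottle` and its shape are illustrative assumptions, not tldraw's actual API:

```typescript
// Sketch only: `makeCappedThrottle` is a stand-in, not tldraw's implementation.
type Fn = () => void

/**
 * Throttle `fn` to at most `maxFps` calls per second, regardless of how
 * high a global adaptive target climbs. The clock is injected so the
 * behavior can be tested without real timers.
 */
export function makeCappedThrottle(fn: Fn, maxFps: number, now: () => number = Date.now) {
	const minInterval = 1000 / maxFps
	let last = -Infinity
	return () => {
		const t = now()
		if (t - last >= minInterval) {
			last = t
			fn()
		}
	}
}
```

A socket-message throttle could then pass e.g. `maxFps: 30` while the render path uses the adaptive target.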

@SomeHats
Contributor

@mimecuvalo have you had a chance to see how this works across mobile devices? @MitjaBezensek can probably weigh in more but i'm guessing that's the main area where we want to avoid pushing the browser to its max so we don't get throttled

@mimecuvalo
Member Author

at the moment TLSyncClient uses this to throttle socket messages to 60fps and I don't think we'd want to be sending messages any faster than that. Maybe as part of this refactor you could allow overriding the upper fps bound for some throttled operations?

Good call out! Yeah, maybe it doesn't quite feel right that TLSyncClient should work at 60fps (at the moment). It does feel like either a) this uses a different, lodash-unmodified throttle function or b) we set an upper bound as you suggest. Do you have a sense of a number that you'd suggest? Given network latency, I wouldn't expect this to be very high; I think 30fps is just fine for this - I know we use it for collaborative mouse cursors.

@mimecuvalo have you had a chance to see how this works across mobile devices? @MitjaBezensek can probably weigh in more but i'm guessing that's the main area where we want to avoid pushing the browser to its max so we don't get throttled

Good push. Had a chat with @MitjaBezensek about this offline, it's possible that a variant on this PR is to just suss out the max FPS and then let the browser go from there. So, we have a target FPS that we deduce, and then the browser can throttle the requestAnimationFrame as needed, which it does anyway — this instead of us trying to change the target on the fly.

@MitjaBezensek
Contributor

I'm trying to understand the benefits here. So:

  1. We will run at higher frame rates on devices that can handle it. That's definitely great.
  2. What happens for the devices that can't handle it though? Is adding this logic better than just relying on the browser doing the throttling? Did you test it out?
    I'd really want to make sure that the additional logic does not make things worse for lower-end devices. I could definitely see doubling the rate at which we do things causing even more pressure on these devices and making them throttle sooner. Will this logic kick in and make it better? I'd definitely like us to answer these questions before proceeding, especially since it introduces quite some complexity to an already complex part of the codebase.

@mimecuvalo
Member Author

We will run at higher frame rates on devices that can handle it. That's definitely great.

yeah exactly.

Will this logic kick in and make it better? I'd definitely like us to answer these questions before proceeding. Especially since it does introduce quite some complexity to already complex part of the codebase.

this logic is adaptive in both directions (as opposed to the other variant PR). if slower operations start happening, the target FPS is lowered to match what the device can handle.

What happens for the devices that can't handle it though?

One thing worth mentioning here is that we currently have a fixed 60 FPS target; if this concern held, it would already be affecting devices that can't keep up. I think this is where the browser steps in and calls the RAF at a lower FPS if the workload is high. It's a target, so the browser isn't necessarily forced to work faster just b/c we change the target.

Did you test it out?

There's an easy way to test this: just set all the targets to 1000 🙃 And it still works just fine.

Screenshot 2025-07-24 at 14 22 22

Is adding this logic better than just relying on the browser doing the throttling?

I think the good thing about Steve's PR (#6409) is that if the browser is having a hard time catching up, the flush will still happen when it finally does. If frames keep piling up and getting delayed, the target FPS becomes more wishful thinking and work just happens whenever the RAF is finally called. The adaptive piece of this tries to align more closely with what the browser is actually able to handle at the moment, workload-wise. So for slower devices it just brings the targets more in line; it shouldn't affect perf by "overclocking" or trying to make things run faster. The benefit is strictly for faster devices that are detected to be able to run faster.
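The two-way adaptive idea described here (lower the target when frames run slow, let it climb when there's headroom) can be sketched as a pure adjustment step. Names, thresholds, and step sizes below are illustrative assumptions, not the PR's actual code:

```typescript
// Illustrative sketch; not tldraw's implementation. The 0.9 tolerance
// and +10 step are assumed tuning values.
const MIN_FPS = 30
const MAX_FPS = 120

/**
 * Nudge the target FPS toward what the device measurably sustains.
 * Adaptive in both directions: slow measured frames pull the target
 * down, measured headroom lets it climb back up gradually.
 */
export function adjustTargetFps(measuredFps: number, currentTarget: number): number {
	const clamp = (v: number) => Math.min(MAX_FPS, Math.max(MIN_FPS, v))
	if (measuredFps < currentTarget * 0.9) {
		// Device is falling behind: drop toward the measured rate.
		return clamp(Math.round(measuredFps))
	}
	if (measuredFps > currentTarget) {
		// Headroom detected: step up gradually to avoid oscillation.
		return clamp(currentTarget + 10)
	}
	return currentTarget
}
```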

@MitjaBezensek
Contributor

MitjaBezensek commented Jul 24, 2025

Will this logic kick in and make it better? I'd definitely like us to answer these questions before proceeding. Especially since it does introduce quite some complexity to already complex part of the codebase.

this logic is adaptive in both directions (as opposed to the other variant PR). if slower operations start happening, the target FPS is lowered to match what the device can handle.

But the browser already does that; what's the benefit of us doing it on top of that? We base this whole logic on the rate at which the browser calls us. So the browser is already doing this work, no?

Did you test it out?

There's an easy way to test this, just set all the targets to 1000 🙃 And it's still works just fine.

I can also set the target to 1000 on main and it works. Where's the benefit of this diff? Show me a use case where us adapting the frame rate works better than just relying on the browser doing it.

@MitjaBezensek
Contributor

MitjaBezensek commented Jul 24, 2025

But the benefit then is strictly for faster devices that are detected to be able to run faster.

Think we can achieve that just by increasing the target frame rate in a separate raf, which would be much simpler?

@mimecuvalo
Member Author

But the browser already does that; what's the benefit of us doing it on top of that? We base this whole logic on the rate at which the browser calls us. So the browser is already doing this work, no?

Yeah, again, I think it was a good push, which is why I created the variant PR based on your feedback: #6497

I can also set the target to 1000 on main and it works. Where's the benefit of this diff (show me a use case where us adapting the frame rate works better than just us relying on browser doing that)?

We might be going in circles 🙃 The benefit is for the faster FPS, right? Where 60fps is slower than what the machine can handle (like my M1 mac). But hey, we could also just say, let the browser handle it all maybe?

Think we can achieve that just by increasing the target frame rate in a separate raf, which would be much simpler?

I think I know what you mean but I'm not sure? Do you wanna create a variant PR to show what you mean?

@MitjaBezensek
Contributor

I think I know what you mean but I'm not sure? Do you wanna create a variant PR to show what you mean?

Something like this https://github.com/tldraw/tldraw/pull/6500/files
We only do the fps check 60 times, but we could keep it running indefinitely, just like in this PR. Not sure if it's needed though 🤷‍♂️
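The one-shot detection idea (count rAF callbacks over a bounded window, deduce the refresh rate, then stop) might look roughly like this. This is a sketch, not the code in the linked PR; snapping the noisy measurement to well-known refresh rates is an assumption:

```typescript
// Illustrative sketch of "measure once, then stop"; not the code in #6500.

/** Common display refresh rates to snap a noisy measurement to. */
const KNOWN_RATES = [30, 60, 75, 90, 120, 144, 240]

/**
 * Given timestamps from N consecutive requestAnimationFrame callbacks,
 * estimate the display's refresh rate and snap it to the nearest
 * well-known value to smooth out per-frame jitter.
 */
export function estimateRefreshRate(timestamps: number[]): number {
	if (timestamps.length < 2) return 60 // not enough data; assume the default
	const elapsed = timestamps[timestamps.length - 1] - timestamps[0]
	const rawFps = ((timestamps.length - 1) * 1000) / elapsed
	return KNOWN_RATES.reduce((best, rate) =>
		Math.abs(rate - rawFps) < Math.abs(best - rawFps) ? rate : best
	)
}
```

The caller would collect the timestamps in a short rAF loop and then cancel it, rather than measuring forever.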

@mimecuvalo
Member Author

I think I know what you mean but I'm not sure? Do you wanna create a variant PR to show what you mean?

Something like this https://github.com/tldraw/tldraw/pull/6500/files We only do the fps check 60 times, but we could keep it running indefinitely, just like in this pr. Not sure if it's needed though 🤷‍♂️

btw, you said "Think we can achieve that just by increasing the target frame rate in a separate raf," but that's what this PR and the maximum variant both do, right? The measuring is happening in a separate RAF cycle.

Your PR, I think, might potentially cause a separate issue:

  • since it runs as a side effect on module load, and since tldraw might be part of a heavier page that's loading, a 500ms wait before measuring might capture a heavy initialization sequence, and then we won't get the right measurement
  • the max-checks idea is interesting and could be applied to this existing PR, but, again, I would worry about measuring at some wrong moment where we don't get the right reading.

@MitjaBezensek
Contributor

btw, you said "Think we can achieve that just by increasing the target frame rate in a separate raf," but that's what this PR and the maximum variant both do, right? The measuring is happening in a separate RAF cycle.

Your PR also down-regulates the fps, which my PR does not do. And this is the part that I have been questioning all along.

Your PR, I think, might potentially cause a separate issue:

  • since it runs as a side effect on module load, and since tldraw might be part of a heavier page that's loading, a 500ms wait before measuring might capture a heavy initialization sequence, and then we won't get the right measurement
  • the max-checks idea is interesting and could be applied to this existing PR, but, again, I would worry about measuring at some wrong moment where we don't get the right reading.

I only wanted to show you what I had in mind; it's in no way production-ready, just trying to showcase the idea of "what if we only increase the fps on devices that have support for it". We can achieve that in many ways and my PR is just a quick example of that.

variant on #6470 (this is based on that PR)
@MitjaBezensek wdyt?

### Change type

- [ ] `bugfix`
- [x] `improvement`
- [ ] `feature`
- [ ] `api`
- [ ] `other`
@mimecuvalo mimecuvalo changed the title perf: adaptive FPS perf: detect target maximum FPS Jul 25, 2025
@mimecuvalo
Member Author

@MitjaBezensek / @ds300 any remaining feedback?


// Start measuring actual browser frame timing
if (!isTest()) {
startFrameTimingMeasurement()
Contributor


Should we maybe do this outside of flush? This is a really hot code path, so it might be worth avoiding this call, which early-returns all the time.


// Only update target FPS if user hasn't manually set it
if (!isManuallySet) {
targetFps = maxDetectedFps
Contributor


Might be worth stopping frame measurements if we already hit 120 fps?

Member Author


Actually, now I'm wondering if 120 should be the max or not 😆
There are high-end gaming monitors that run at 240Hz... maybe we should allow even higher??

Contributor


Not sure we can reliably process things at that rate 😁

@MitjaBezensek MitjaBezensek self-requested a review July 28, 2025 11:11
Contributor

@MitjaBezensek MitjaBezensek left a comment


This looks like it lowers the performance somehow? Here's me resizing and translating 400 shapes.

Was testing with yarn dev-app. So might be due to the TLSyncClient changes?

Main

CleanShot.2025-07-28.at.13.09.41.mp4
CleanShot.2025-07-28.at.13.18.49.mp4

This PR

CleanShot.2025-07-28.at.13.10.57.mp4
CleanShot.2025-07-28.at.13.17.38.mp4

@mimecuvalo
Member Author

This looks like it lowers the performance somehow? Here's me resizing and translating 400 shapes.

Was testing with yarn dev-app. So might be due to the TLSyncClient changes?

Huh, lemme take a look, it's weird b/c the target is still 60 in your videos, so it shouldn't be much of a change? Curious...

@mimecuvalo
Member Author

This looks like it lowers the performance somehow? Here's me resizing and translating 400 shapes.

Was testing with yarn dev-app. So might be due to the TLSyncClient changes?

I reverted the last changes to TLSyncClient (the ones that removed fpsThrottle); those changes indeed did make it worse.
It basically ends up worse to have a separate queue of operations vs. the TLSyncClient queue being tied in specifically to our main FPS queue. This was an interesting learning!

I've been exploring how I could modify TLSyncClient. One option is that when you're solo in a room, the updates don't have to happen as often as 60fps(!); another option is some kind of backoff if we're starting to send/receive a lot of operations over the wire and our fpsQueue is starting to fall behind. Def. worth a discussion to think about this some more.

@mimecuvalo
Member Author

Oh, the other thing is possibly being able to designate something on the fpsQueue as having a lower FPS, something worth considering as well.

Just spoke with @ds300 offline about the TLSyncClient and adding a "solo mode". Basically to:

  • not send up mouse cursor and presence state to the server when in a room by yourself
  • lower the FPS (even to once a second) when you're just in a room by yourself
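The solo-mode rate selection described in those two bullets could be as simple as a presence-based switch. This is a hypothetical sketch; `getSyncTargetFps` is an illustrative name, not an API from this PR:

```typescript
// Hypothetical sketch of the solo-mode idea discussed above.

/**
 * Pick the sync/presence update rate from room occupancy. Alone in a
 * room, nobody can see your cursor or presence, so updates can drop
 * to ~1fps; with collaborators present, use the normal sync rate.
 */
export function getSyncTargetFps(otherUsersInRoom: number, normalFps = 30): number {
	return otherUsersInRoom === 0 ? 1 : normalFps
}
```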

@mimecuvalo mimecuvalo changed the title perf: detect target maximum FPS [wip] perf: detect target maximum FPS Jul 31, 2025
@mimecuvalo
Member Author

I marked this PR as [wip] and made some significant changes in advance of @ds300's work on an isSoloMode.

Basically now there are two queues: one that can have a slower FPS if needed (like the network syncing), and one that can have the more performant FPS.

}

// Add new item to timing-based queue
timingBasedQueue.push({ fn, getTargetFps, lastRunTime: Date.now() })


Bug: Throttled Functions Delayed on First Call

Functions added to the timing-based queue via fpsThrottle (with getTargetFps) are initialized with lastRunTime: Date.now(). This prevents them from executing immediately on their first call. The flushTimingBasedQueue logic, which checks elapsed >= targetTimePerFrame, will find elapsed near zero for new items, delaying their initial execution by at least one throttle cycle. This deviates from the expected behavior of throttled functions, which typically run immediately on their first invocation.


this.socket = config.socket
this.onAfterConnect = config.onAfterConnect
// TODO TODO TODO
this.isSoloMode = false


Bug: Incomplete Solo Mode Code Committed

Unfinished 'solo mode' development code was accidentally committed. This is evidenced by "TODO TODO TODO" comments, a redundant this.isSoloMode = false assignment in the constructor, and the isSoloMode property always being false despite its use in fpsThrottle logic.


@mimecuvalo mimecuvalo mentioned this pull request Sep 30, 2025
@cloudflare-workers-and-pages

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

| Status | Name | Latest Commit | Updated (UTC) |
| --- | --- | --- | --- |
| 🔵 In progress (View logs) | multiplayer-template | 7e366d9 | Oct 01 2025, 12:48 PM |

@cloudflare-workers-and-pages

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

| Status | Name | Latest Commit | Updated (UTC) |
| --- | --- | --- | --- |
| 🔵 In progress (View logs) | chat-template | 7e366d9 | Oct 01 2025, 12:48 PM |

@cloudflare-workers-and-pages

cloudflare-workers-and-pages bot commented Oct 1, 2025

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

| Status | Name | Latest Commit | Updated (UTC) |
| --- | --- | --- | --- |
| ✅ Deployment successful! (View logs) | multiplayer-template | 7e366d9 | Oct 01 2025, 12:55 PM |

@claude

claude bot commented Oct 1, 2025

🤖 Claude Code Review

Overview

This PR introduces adaptive FPS detection to unlock higher frame rates (up to 120fps) on capable devices, moving beyond the hardcoded 60fps limit. The implementation is comprehensive and well-thought-out, but there are several concerns that should be addressed before merging.

🔴 Critical Issues

1. Potential Memory Leak in Frame Timing Measurement (packages/utils/src/lib/throttle.ts:68-120)

The startFrameTimingMeasurement() function can be called multiple times before the previous measurement completes, but the check if (isFrameTimingActive || isTest()) return only prevents starting a new measurement loop—it doesn't cancel the old frameTimingRaf. If isFrameTimingActive gets set to false elsewhere while a rAF is still pending, a new measurement could start without canceling the old one.

Issue: Lines 143-145 and 175-177 call startFrameTimingMeasurement() which could theoretically race with the cleanup logic.

Recommendation: Add cleanup of existing frameTimingRaf at the start of startFrameTimingMeasurement():

const startFrameTimingMeasurement = () => {
	if (isFrameTimingActive || isTest()) return
	
	// Clean up any lingering rAF before starting new measurement
	if (frameTimingRaf) {
		cancelAnimationFrame(frameTimingRaf)
	}
	
	isFrameTimingActive = true
	// ... rest of function
}

2. Breaking Behavior Change in fpsThrottle() Signature (packages/utils/src/lib/throttle.ts:213-273)

The getTargetFps parameter fundamentally changes the throttling behavior:

  • Without getTargetFps: Functions execute in FIFO order
  • With getTargetFps: Functions may execute out of order

Issue: This is a significant behavioral change that could break existing code expectations. The TLSyncClient changes (packages/sync-core/src/lib/TLSyncClient.ts:565-586) now use the custom FPS path, meaning flushPendingPushRequests and scheduleRebase could execute out of order relative to other throttled operations.

Questions:

  1. Is out-of-order execution safe for sync operations?
  2. Should this behavior difference be more prominently documented?
  3. Consider if these operations need ordering guarantees

3. Race Condition in Dual Queue System (packages/utils/src/lib/throttle.ts:180-191)

The flush() function processes both queues sequentially, but scheduleFlush() can be called from multiple places:

  • Line 190 (end of flush if queues not empty)
  • Line 243 (adding to timing-based queue)
  • Line 261 (adding to ordered queue)

Issue: If flushOrderedQueue() takes significant time and another scheduleFlush() is called, there's potential for overlapping rAF callbacks since frameRaf is set to undefined inside the rAF callback (line 198) but the flush is still executing.

Recommendation: Set frameRaf = undefined before calling flush() or add additional guards.

⚠️ Moderate Issues

4. Inconsistent Timing Sources (packages/utils/src/lib/throttle.ts)

The code mixes Date.now() (lines 131, 150, 242) for throttling decisions with performance.now() (lines 73, 81, 95, 105) for FPS measurement.

Issue: While not strictly a bug, mixing timing sources can cause subtle issues:

  • Date.now() can jump due to system clock changes
  • performance.now() is monotonic and more precise

Recommendation: Use performance.now() consistently for all timing, or document why different sources are used.

5. Magic Numbers Without Constants

  • Line 25: 0.9 multiplier for variance tolerance
  • Line 87: FPS calculation formula could be simplified
  • Lines 41, 57, 98: Hardcoded array length checks (3, 5)

Recommendation: Extract magic numbers to named constants:

const TIMING_VARIANCE_TOLERANCE = 0.9
const MIN_MEASUREMENTS_FOR_UPDATE = 3
const MAX_MEASUREMENTS_TO_KEEP = 5

6. FPS Calculation Complexity (packages/utils/src/lib/throttle.ts:86-88)

const measuredFps = Math.round(
	framesInCurrentWindow * (FPS_MEASUREMENT_WINDOW / elapsed) * (1000 / FPS_MEASUREMENT_WINDOW)
)

This simplifies to: framesInCurrentWindow * 1000 / elapsed

The (FPS_MEASUREMENT_WINDOW / elapsed) and (1000 / FPS_MEASUREMENT_WINDOW) cancel out.

7. Module-Level Mutable State (packages/utils/src/lib/throttle.ts:7-38)

All state is stored in module-level variables, making testing difficult and preventing multiple independent throttle systems.

Impact:

  • Cannot easily test different FPS scenarios in parallel
  • Cannot reset state between tests without resetAdaptiveFps()
  • Global state could cause issues in SSR or multi-tenant scenarios

Recommendation: Consider encapsulating state in a class or function closure if this becomes problematic.

🟡 Minor Issues

8. Missing Test Coverage

No test file for throttle.ts was found in the repository. Given the complexity of this change:

Recommendation: Add packages/utils/src/lib/throttle.test.ts covering:

  • Adaptive FPS detection behavior
  • Dual queue ordering guarantees
  • Custom FPS timing behavior
  • Edge cases (empty queues, rapid calls, cancellation)
  • Memory leak prevention (rAF cleanup)

9. API Surface Concerns (packages/utils/api-report.api.md:133-137)

The new signature fpsThrottle(fn: {(): void; cancel?(): void}, getTargetFps?: () => number) exposes internal complexity.

Question: Should getCurrentFps(), setTargetFps(), and resetAdaptiveFps() be @internal or @public? They're currently marked @internal but exposed in the API report.

10. DefaultDebugPanel Display (packages/tldraw/src/lib/ui/components/DefaultDebugPanel.tsx:114)

The debug panel now shows: FPS ${fps.toString()} (target: ${getCurrentFps()})

Suggestion: Consider also showing maxDetectedFps to help users understand if their device was detected correctly:

`FPS ${fps} / ${getCurrentFps()} (max: ${maxDetectedFps})`

✅ Positive Aspects

  1. Well-documented code with clear comments explaining design decisions
  2. Gradual adaptation prevents sudden FPS changes that could cause jank
  3. Proper test environment handling with isTest() checks
  4. Smart optimization for solo mode in TLSyncClient (1fps vs 30fps)
  5. Backward compatible for existing fpsThrottle() calls without parameters
  6. Good performance monitoring with windowed FPS measurements

🔒 Security Considerations

No security concerns identified. The code doesn't handle user input or sensitive data.

🎯 Performance Considerations

Positive:

  • Unlocking 120fps will provide noticeably smoother experience on capable hardware
  • Solo mode optimization (1fps) reduces unnecessary work when collaboration inactive
  • Measurement windows are appropriately sized to avoid excessive overhead

Concerns:

  • Additional rAF loop for FPS measurement adds overhead (though minimal)
  • Two separate queues with different iteration patterns add complexity
  • Consider impact on battery life with sustained 120fps rendering

📋 Recommendations Summary

Before merging:

  1. ✅ Add cleanup guard in startFrameTimingMeasurement()
  2. ✅ Verify out-of-order execution is safe for TLSyncClient operations
  3. ✅ Add comprehensive test coverage
  4. ✅ Review race condition in dual-queue flush logic
  5. ⚠️ Consider using performance.now() consistently

Nice to have:
6. Extract magic numbers to named constants
7. Simplify FPS calculation formula
8. Add maxDetectedFps to debug panel display
9. Document behavioral differences more prominently

Overall Assessment

This is a well-designed feature that will meaningfully improve UX on high-refresh-rate displays. The adaptive detection approach is smart, and the code is generally well-structured. However, the critical issues around potential memory leaks, behavioral changes, and race conditions should be addressed before merging, and test coverage is essential given the complexity of the timing logic.

Recommendation: Request changes to address critical issues, particularly around rAF cleanup and test coverage.

@mimecuvalo
Member Author

closing in favor of #6868!

@mimecuvalo mimecuvalo closed this Oct 3, 2025
@steveruizok steveruizok changed the title [wip] perf: detect target maximum FPS perf: detect target maximum FPS Jan 2, 2026
@steveruizok steveruizok added the sdk Affects the tldraw sdk label Jan 2, 2026
github-merge-queue bot pushed a commit that referenced this pull request Jan 14, 2026
…7418)

once more, with feeling. The previous attempts were trying to schedule
UI and network events on the same queue, which ... I don't know
what we were thinking :P
Anyway, this creates a proper FpsScheduler class that can take different
target rates which solves the issues we were seeing. But also, it solves
the fact that even without the 120fps change, we shouldn't be combining
these queues.

I recommend reviewing this PR with "hide whitespace" on. You'll note
that it has fewer changes than you would expect; it's really more about
creating a JS class to encapsulate the throttle queue.

this lays the groundwork for the child PR here that does the network bit
of this: #7657

previous PRs: #6868 to
#6470

### Change type

- [ ] `bugfix`
- [x] `improvement`
- [ ] `feature`
- [ ] `api`
- [ ] `other`

### Test plan

- [x] Unit tests (if present)
- [ ] End to end tests (if present)

### Release notes

- Improved performance by separating UI and network scheduling queues.

### API changes

- adds `FpsScheduler` to be able to create a FPS-throttled queue of
functions to execute



<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Introduce per-instance FPS scheduling**
> 
> - Add `FpsScheduler` class in `lib/throttle` with its own queue/state
and configurable target FPS; create a default 120fps instance and have
`fpsThrottle`/`throttleToNextFrame` delegate to it
> - Export `FpsScheduler` from `utils` (`src/index.ts`); update API
report accordingly
> - Add comprehensive unit tests (`lib/throttle.test.ts`) covering
throttling, next-frame batching, cancelation, and real-world scenarios
> - UI tweak: `DefaultDebugPanel` FPS readout now includes `maxKnownFps`
(`FPS ${fps} (max: ${maxKnownFps})`)
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
871ad95. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Labels

improvement Product improvement sdk Affects the tldraw sdk

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants