Conversation
API Changes Check Passed. Great! The PR description now includes the required "### API changes" section. This helps reviewers and SDK users understand the impact of your changes.
Pull Request Overview
This PR introduces an adaptive FPS system that dynamically adjusts the target frame rate between 30-120 FPS based on real-time performance monitoring, replacing the previous fixed 60 FPS limit to better utilize high-performance devices.
- Implements adaptive FPS algorithm that monitors frame execution time and adjusts target FPS accordingly
- Adds API functions for getting current FPS, resetting the adaptive system, and manually setting target FPS
- Updates debug panel to display both current and target FPS for better visibility
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| packages/utils/src/lib/throttle.ts | Core adaptive FPS implementation with performance monitoring and automatic adjustment logic |
| packages/utils/src/index.ts | Exports new adaptive FPS API functions |
| packages/utils/api-report.api.md | API documentation updates for new internal functions |
| packages/tldraw/src/lib/ui/components/DefaultDebugPanel.tsx | Enhanced FPS display to show target FPS alongside current FPS |
At the moment TLSyncClient uses this to throttle socket messages to 60fps, and I don't think we'd want to be sending messages any faster than that. Maybe as part of this refactor you could allow overriding the upper fps bound for some throttled operations?
@mimecuvalo have you had a chance to see how this works across mobile devices? @MitjaBezensek can probably weigh in more, but I'm guessing that's the main area where we want to avoid pushing the browser to its max so we don't get throttled.
Good callout! Yeah, maybe it doesn't feel right that TLSyncClient should work at 60fps (at the moment). It does feel like either a) this uses a different, lodash-unmodified throttle function, or b) we set an upper bound as you suggest — do you have a sense of a number that you'd suggest? Given network latency, I wouldn't expect this to be very high; I think 30fps is just fine for this. I know we use it for collaborative mouse cursors.
Good push. Had a chat with @MitjaBezensek about this offline; it's possible that a variant on this PR is to just suss out the max FPS and then let the browser go from there. So we have a target FPS that we deduce, and then the browser can throttle the […]
I'm trying to understand the benefits here. So:
yeah exactly.
this logic is adaptive in both directions (as opposed to the other variant PR). If slower operations start happening, the target FPS is lowered to match what the device can handle.
One thing worth mentioning here: we currently have a fixed 60 FPS target; if high targets were really a problem, that would also be affecting current devices that can't keep up. I think this is where the browser steps in and calls the rAF at a lower rate if the workload is high. It's a target, so the browser isn't necessarily forced to work faster just b/c we change the target.
There's an easy way to test this: just set all the targets to 1000 🙃 And it still works just fine.
I think the good fix in Steve's PR (#6409) is that if the browser is having a hard time catching up, the flush still happens when it finally does. If frames keep piling up and getting delayed, the target FPS becomes more wishful thinking, and work basically happens whenever the rAF is finally called. The adaptive piece of this tries to align more closely with what the browser is actually able to handle at the moment, workload-wise. So for slower devices it just brings the targets more in line; it shouldn't affect perf by "overclocking" or trying to make things run faster. The benefit is strictly for faster devices that are detected to be able to run faster.
But the browser already does that, so what's the benefit of us doing it on top? We base this whole logic on the rate at which the browser calls us, so the browser is already doing this work, no?
I can also set the target to 1000 on main and it works. Where's the benefit of this diff? Show me a use case where us adapting the frame rate works better than just relying on the browser to do it.
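For what it's worth, the "deduce a target FPS" idea being debated here can be sketched as a pure function over rAF timestamps. The function name, the 30/120 clamp, and the median heuristic are assumptions, not the PR's actual implementation:

```typescript
// Sketch: derive a device's native refresh rate from a run of
// requestAnimationFrame timestamps. The browser fires rAF at the display
// rate when idle, so the median delta approximates the frame budget.
const MIN_FPS = 30
const MAX_FPS = 120 // the thread debates whether this cap should be higher

function detectMaxFps(timestamps: number[]): number {
	if (timestamps.length < 2) return 60 // not enough samples; assume 60
	const deltas: number[] = []
	for (let i = 1; i < timestamps.length; i++) {
		deltas.push(timestamps[i] - timestamps[i - 1])
	}
	// the median is robust to the occasional long frame caused by GC or jank
	deltas.sort((a, b) => a - b)
	const median = deltas[Math.floor(deltas.length / 2)]
	const fps = Math.round(1000 / median)
	return Math.min(MAX_FPS, Math.max(MIN_FPS, fps))
}
```

Because rAF tracks the display's refresh rate, a short warm-up run of timestamps is usually enough to distinguish a 60Hz panel from a 120Hz one.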
Think we can achieve that just by increasing the target frame rate in a separate raf, which would be much simpler?
Yeah, again, I think it was a good push, which is why I created the variant PR based on your feedback: #6497
We might be going in circles 🙃 The benefit is for the faster FPS, right? Where 60fps is slower than what the machine can handle (like my M1 mac). But hey, we could also just say, let the browser handle it all maybe?
I think I know what you mean, but I'm not sure? Do you wanna create a variant PR to show what you mean?
Something like this https://github.com/tldraw/tldraw/pull/6500/files |
btw, you said "Think we can achieve that just by increasing the target frame rate in a separate raf," but that's what this PR and the max-FPS variant both do, right? The measuring happens in a separate rAF cycle. Your PR, I think, might cause a separate issue:
Your PR also down-regulates the fps, which my PR does not do. And this is the part that I have been questioning all along.
I only wanted to show you what I had in mind; it's in no way production-ready, just trying to showcase the idea of "what if we only increase the fps on devices that have support for it". We can achieve that in many ways, and my PR is just a quick example of that.
Variant on #6470 (this is based on that PR). @MitjaBezensek wdyt?

### Change type

- [ ] `bugfix`
- [x] `improvement`
- [ ] `feature`
- [ ] `api`
- [ ] `other`
@MitjaBezensek / @ds300 any remaining feedback?
```ts
// Start measuring actual browser frame timing
if (!isTest()) {
	startFrameTimingMeasurement()
}
```
Should we maybe do this outside of flush? This is a really hot code path, so it might be worth avoiding this call, which early-returns all the time anyway?
```ts
// Only update target FPS if user hasn't manually set it
if (!isManuallySet) {
	targetFps = maxDetectedFps
}
```
Might be worth stopping frame measurements if we already hit 120 fps?
Actually, now I'm wondering if 120 should be the max or not 😆
There are high-end gaming monitors that run at 240Hz... maybe we should allow even higher??
Not sure we can reliably process things at that rate 😁
This looks like it lowers performance somehow? Here's me resizing and translating 400 shapes.
I was testing with `yarn dev-app`, so it might be due to the TLSyncClient changes?
Main
CleanShot.2025-07-28.at.13.09.41.mp4
CleanShot.2025-07-28.at.13.18.49.mp4
This PR
CleanShot.2025-07-28.at.13.10.57.mp4
CleanShot.2025-07-28.at.13.17.38.mp4
Huh, lemme take a look. It's weird b/c the target is still 60 in your videos, so it shouldn't be much of a change? Curious...
I reverted the last changes. I've been exploring how I could modify TLSyncClient: one option is that when you're solo in a room, the updates don't have to happen quite as often as 60fps(!); another option is having some kind of backoff if we start to send/receive a lot of operations over the wire and our fpsQueue starts to fall behind. Def. worth a discussion to think about this some more.
Oh, the other thing is possibly being able to designate something on the […]

Just spoke with @ds300 offline about the TLSyncClient and adding a "solo mode". Basically to:
I marked this PR as […]. Basically now there are two queues: one that can have a slower FPS if needed (like the network syncing), and one that can run at the more performant FPS.
```ts
// Add new item to timing-based queue
timingBasedQueue.push({ fn, getTargetFps, lastRunTime: Date.now() })
```
**Bug: Throttled Functions Delayed on First Call**

Functions added to the timing-based queue via `fpsThrottle` (with `getTargetFps`) are initialized with `lastRunTime: Date.now()`. This prevents them from executing immediately on their first call. The `flushTimingBasedQueue` logic, which checks `elapsed >= targetTimePerFrame`, will find `elapsed` near zero for new items, delaying their initial execution by at least one throttle cycle. This deviates from the expected behavior of throttled functions, which typically run immediately on their first invocation.
Locations (1)
```ts
this.socket = config.socket
this.onAfterConnect = config.onAfterConnect
// TODO TODO TODO
this.isSoloMode = false
```
**Bug: Incomplete Solo Mode Code Committed**

Unfinished "solo mode" development code was accidentally committed. This is evidenced by "TODO TODO TODO" comments, a redundant `this.isSoloMode = false` assignment in the constructor, and the `isSoloMode` property always being `false` despite its use in `fpsThrottle` logic.
Locations (1)
Deploying with Vercel

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| 🔵 In progress (View logs) | multiplayer-template | 7e366d9 | Oct 01 2025, 12:48 PM |

Deploying with Vercel

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| 🔵 In progress (View logs) | chat-template | 7e366d9 | Oct 01 2025, 12:48 PM |

Deploying with Vercel

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! (View logs) | multiplayer-template | 7e366d9 | Oct 01 2025, 12:55 PM |
🤖 Claude Code Review

## Overview

This PR introduces adaptive FPS detection to unlock higher frame rates (up to 120fps) on capable devices, moving beyond the hardcoded 60fps limit. The implementation is comprehensive and well thought out, but there are several concerns that should be addressed before merging.

## 🔴 Critical Issues

### 1. Potential Memory Leak in Frame Timing Measurement (packages/utils/src/lib/throttle.ts:68-120)

The […] Issue: Lines 143-145 and 175-177 call […] Recommendation: Add cleanup of existing […]

```ts
const startFrameTimingMeasurement = () => {
	if (isFrameTimingActive || isTest()) return
	// Clean up any lingering rAF before starting new measurement
	if (frameTimingRaf) {
		cancelAnimationFrame(frameTimingRaf)
	}
	isFrameTimingActive = true
	// ... rest of function
}
```

### 2. Breaking Behavior Change in `fpsThrottle()` Signature (packages/utils/src/lib/throttle.ts:213-273)

The […] Issue: This is a significant behavioral change that could break existing code expectations. The TLSyncClient changes (packages/sync-core/src/lib/TLSyncClient.ts:565-586) now use the custom FPS path, meaning […] Questions: […]

### 3. Race Condition in Dual Queue System (packages/utils/src/lib/throttle.ts:180-191)

The […] Issue: If […] Recommendation: Set […]
closing in favor of #6868!
…7418) Once more with feeling. The previous attempts were trying to schedule UI and network events on the same queue, which ... I don't know what we were thinking :P

Anyway, this creates a proper FpsScheduler class that can take different target rates, which solves the issues we were seeing. But also, it addresses the fact that even without the 120fps change, we shouldn't be combining these queues.

I recommend reviewing this PR with "hide whitespace" on. You'll note that it has fewer changes than you would expect; it's really more about creating a JS class to encapsulate the throttle queue. This lays the groundwork for the child PR here that does the network bit of this: #7657

Previous PRs: #6868, #6470

### Change type

- [ ] `bugfix`
- [x] `improvement`
- [ ] `feature`
- [ ] `api`
- [ ] `other`

### Test plan

- [x] Unit tests (if present)
- [ ] End to end tests (if present)

### Release notes

- Improved performance by separating UI and network scheduling queues.

### API changes

- Adds `FpsScheduler` to be able to create an FPS-throttled queue of functions to execute

> [!NOTE]
> **Introduce per-instance FPS scheduling**
>
> - Add `FpsScheduler` class in `lib/throttle` with its own queue/state and configurable target FPS; create a default 120fps instance and have `fpsThrottle`/`throttleToNextFrame` delegate to it
> - Export `FpsScheduler` from `utils` (`src/index.ts`); update API report accordingly
> - Add comprehensive unit tests (`lib/throttle.test.ts`) covering throttling, next-frame batching, cancelation, and real-world scenarios
> - UI tweak: `DefaultDebugPanel` FPS readout now includes `maxKnownFps` (`FPS ${fps} (max: ${maxKnownFps})`)
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 871ad95. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>

The throttle PR (#6409) reminded me of a wee conversation we had when the performance improvements went out last year (#2977 (comment)) about how setting the FPS to be a constant 60 sort of short-changed the faster machines that could achieve 120fps.
When it hits 120 fps on my machine it starts feeling butter smooooth.
### Change type

- [x] `improvement`

### Test plan

### Release notes

### API changes