perf(editor): reduce allocations and hash computation overhead #7840

steveruizok merged 7 commits into `main`
Conversation
Closes #7833. Previously, once the number of shapes grew to 256, we were creating new hashes for all shapes on every frame.

## Summary

- Cache all strings regardless of length (removed the 16-char minimum threshold)
- Remove cache eviction logic (no more 255-entry limit and reset)
- Simplify to an unbounded cache, since shape IDs are finite per document

The previous implementation avoided caching short strings and had an eviction policy. However, in tldraw's use case the number of unique string keys is bounded by the document's shape/record count, so an unbounded cache is safe and reduces overhead.

## Test plan

- [x] Typecheck passes
- [ ] Manual testing with large documents

🤖 Generated with [Claude Code](https://claude.ai/claude-code)

---

> [!NOTE]
> **Medium Risk**
> Changes the hashing hot path used by `ImmutableMap` to always cache string hashes and removes cache eviction, which could increase memory use or alter performance characteristics on workloads with many unique strings.
>
> **Overview**
> **Simplifies string hash caching in `ImmutableMap`** by always routing string hashing through `cachedHashString` (removing the short-string threshold).
>
> Removes the cache size/eviction logic and related constants, making `stringHashCache` an unbounded in-memory map that retains all hashed strings for the process lifetime.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit e3fefe0.</sup>

Co-authored-by: Claude Opus 4.5 <[email protected]>
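The simplified lookup described above can be sketched as follows. This is a minimal sketch, not the actual `ImmutableMap` code: `hashString` here is a stand-in djb2-style hash, and the cache uses `Object.create(null)` to sidestep `Object.prototype` key collisions (e.g. `"toString"`).

```typescript
// Unbounded string-hash cache; null prototype so keys like "toString"
// don't collide with inherited Object.prototype properties.
const stringHashCache: Record<string, number | undefined> = Object.create(null)

// Stand-in hash function (djb2-style); the real package has its own hashString.
function hashString(s: string): number {
	let h = 5381
	for (let i = 0; i < s.length; i++) {
		h = ((h << 5) + h + s.charCodeAt(i)) | 0
	}
	return h
}

// Every string goes through the cache now; no length threshold, no eviction.
function cachedHashString(s: string): number {
	let hashed = stringHashCache[s]
	if (hashed === undefined) {
		hashed = hashString(s)
		stringHashCache[s] = hashed
	}
	return hashed
}
```

On a hot path like `store.get(shapeId)`, repeated lookups of the same ~14-character shape ID now hit the cache instead of rehashing each time.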
…7837)

## Summary

- Add a fast path when all shapes are visible (return a cached empty set)
- On first run, compute from scratch
- On subsequent runs, check whether the result differs before creating a new Set
- Only allocate a new Set when the contents actually changed

This reduces GC pressure by avoiding unnecessary Set allocations on every frame.

## Test plan

- [x] Typecheck passes
- [ ] Manual testing with many shapes on canvas

🤖 Generated with [Claude Code](https://claude.ai/claude-code)

---

> [!NOTE]
> **Low Risk**
> A performance refactor limited to the `notVisibleShapes` derivation; the main risk is subtle cache/identity behavior affecting downstream consumers if the computed value no longer updates when expected.
>
> **Overview**
> Optimizes `notVisibleShapes` to reduce GC pressure by **reusing Set instances** instead of allocating a new `Set` on every recompute.
>
> Adds a fast path that returns a cached empty `Set` when all shapes are visible, computes from scratch on the first run, and on subsequent runs scans to detect changes before building a new `Set` (returning `prevValue` when the contents are unchanged).
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit e74207f.</sup>

Co-authored-by: Claude Opus 4.5 <[email protected]>
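The reuse pattern above can be sketched with hypothetical, simplified signatures (the real derivation reads editor state inside a computed signal; names here are illustrative):

```typescript
// Shared empty set returned when all shapes are visible: no allocation.
const EMPTY_SET: ReadonlySet<string> = new Set()

// Returns the set of hidden shape ids, reusing prevValue when unchanged.
function nextNotVisible(
	allIds: string[],
	isVisible: (id: string) => boolean,
	prevValue: ReadonlySet<string> | undefined
): ReadonlySet<string> {
	let count = 0
	let hasDiff = prevValue === undefined // first run: always build from scratch
	for (const id of allIds) {
		if (isVisible(id)) continue
		count++
		if (!hasDiff && !prevValue!.has(id)) hasDiff = true // a newly hidden shape
	}
	if (count === 0) return EMPTY_SET // fast path: everything visible
	// Same size and no new members means identical contents: reuse previous Set.
	if (!hasDiff && prevValue !== undefined && prevValue.size === count) return prevValue
	const next = new Set<string>()
	for (const id of allIds) if (!isVisible(id)) next.add(id)
	return next
}
```

Returning the identical `prevValue` object also lets downstream computed signals skip work via reference equality.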
…7839)

In order to fix a major performance bottleneck in large multiplayer rooms, this PR replaces expensive lodash deep equality checks with custom equality functions optimized for Sets.

lodash's `isEqual` doesn't handle Sets efficiently - it likely converts them to arrays for deep comparison. With thousands of shapes in large multiplayer rooms, this was causing significant performance overhead on every frame during the equality checks in computed signals.

### Change type

- [x] `improvement`

### Test plan

1. Open a large multiplayer room with many shapes
2. Check the flame graph: `isEqual` should no longer appear in hot paths

- [ ] Unit tests
- [ ] End to end tests

### Release notes

- Improve performance in large multiplayer rooms by replacing lodash deep equality with optimized Set comparisons

Co-authored-by: Claude Opus 4.5 <[email protected]>
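The comparator described here can be sketched as follows, mirroring the `setsEqual` helper this PR adds (early exit on size mismatch, then a single membership pass):

```typescript
// Purpose-built Set equality: O(n) membership checks instead of a
// generic deep comparison over array-converted Sets.
function setsEqual<T>(a: Set<T>, b: Set<T>): boolean {
	if (a === b) return true // same reference: trivially equal
	if (a.size !== b.size) return false // early exit on size mismatch
	for (const item of a) {
		if (!b.has(item)) return false
	}
	return true
}
```

Note this is shallow by design: elements are compared by identity (`Set.has`), which is exactly right for sets of shape IDs or peer IDs.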
The latest updates on your projects. Learn more about Vercel for GitHub.

**5 Skipped Deployments**
```ts
		if (!b.has(item)) return false
	}
	return true
}
```
Duplicated setsEqual function across two files
Medium Severity
The setsEqual function is defined identically in both usePeerIds.ts and CanvasShapeIndicators.tsx. This duplication increases maintenance burden—bug fixes or optimizations must be applied in both places. Consider extracting this to a shared utility module.
## Summary

- Creates a `ShapeCullingProvider` React context that maintains a registry of shape container refs
- Replaces per-shape `useQuickReactor` subscriptions with a single centralized reactor in `ShapesToDisplay`
- Each Shape component registers its container refs on mount via the context and unregisters on unmount
- The centralized `CullingController` uses the context to update visibility of all registered containers

This keeps DOM element tracking in React-land rather than on the Editor, maintaining separation between the editor and rendering layers.

## Performance improvement

With N shapes on canvas, this changes:

- **Before**: N separate `EffectScheduler` instances, each subscribing to `getCulledShapes()` and running when it changes
- **After**: 1 reactor that iterates only the registered containers, updating only those whose state changed

This reduces subscription overhead from O(N) to O(1).

## Test plan

1. Run `yarn dev` and open the examples app
2. Create many shapes on the canvas
3. Pan around the canvas - shapes outside the viewport should still be culled (hidden) correctly
4. Shapes inside the viewport should remain visible
5. Verify no visual regressions when selecting, editing, or interacting with shapes

Closes #7831

---

> [!NOTE]
> **Medium Risk**
> Changes the shape rendering pipeline by moving culling visibility updates to a shared registry and single reactor, which could cause shapes to incorrectly hide/show if registration or updates misfire. Also adjusts store string hashing cache behavior, which may affect memory/perf characteristics under load.
>
> **Overview**
> **Centralizes shape culling display toggling.** Shapes now register their DOM containers via a new `ShapeCullingProvider`/`useShapeCulling` context, and `DefaultCanvas` runs a single `CullingController` reactor to apply `display: none/block` updates for all registered shapes (replacing per-shape culling reactors in `Shape.tsx`).
>
> **Reduces reactive churn in a few hot paths.** `notVisibleShapes` is rewritten to avoid unnecessary `Set` allocations when results don't change, `CanvasShapeIndicators` replaces generic deep equality with a purpose-built comparator for its computed render data, and `useActivePeerIds$` switches to a `Set` equality comparator.
>
> **Tweaks store hashing behavior.** `ImmutableMap` now always uses `cachedHashString` for strings and removes cache size/length limits, changing caching/memory tradeoffs for string hashing.
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 1881c57.</sup>
```ts
for (let i = 0; i < allShapes.length; i++) {
	shape = allShapes[i]
	if (visibleIds.has(shape.id)) continue
	if (!editor.getShapeUtil(shape.type).canCull(shape)) continue
```
i think doing this canCull check twice might be a bit much.
you could still collect the id's above in the first loop, but in an array instead?
```ts
		if (!b.has(item)) return false
	}
	return true
}
```
agreed with Cursor, would be great to have a util file for this sort of thing
```diff
-const STRING_HASH_CACHE_MAX_SIZE = 255
-let STRING_HASH_CACHE_SIZE = 0
-let stringHashCache: Record<string, number> = {}
+const stringHashCache: Record<string, number> = {}
```
i don't think we should make this unbounded. either use an LRU or have a max that's closer to our max shape count
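A bounded variant along the lines suggested here could look like the sketch below. The `MAX_ENTRIES` value is hypothetical (the reviewer suggests something near the max shape count); `Map`'s insertion-order iteration gives a cheap oldest-first (FIFO) eviction, while a true LRU would also re-insert entries on read.

```typescript
// Hypothetical bound; pick a value near the expected max shape count.
const MAX_ENTRIES = 10_000

const cache = new Map<string, number>()

// Caches hash(s), evicting the oldest entry once the bound is reached.
function boundedCachedHash(s: string, hash: (s: string) => number): number {
	let v = cache.get(s)
	if (v === undefined) {
		v = hash(s)
		if (cache.size >= MAX_ENTRIES) {
			// Map iterates in insertion order, so the first key is the oldest.
			cache.delete(cache.keys().next().value!)
		}
		cache.set(s, v)
	}
	return v
}
```

Using a `Map` here also avoids the `Object.prototype` key-collision pitfall that a plain `{}` cache has.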
```ts
let hashed = stringHashCache[string]
if (hashed === undefined) {
	hashed = hashString(string)
	if (STRING_HASH_CACHE_SIZE === STRING_HASH_CACHE_MAX_SIZE) {
```
This could lead to memory issues. stringHashCache would never get cleaned up, would take in ids from shapes from different pages, even different documents. Let's at least set an upper bound, could even be some high number.
also, in a SPA, i'm pretty sure this would persist across pages/boards, no? 😬
Yeah, it definitely would. It's a global, so only a refresh would reset it.
```ts
	collaboratorIndicators: CollaboratorIndicatorData[]
}

function setsEqual<T>(a: Set<T>, b: Set<T>): boolean {
```
Same function here, let's reuse it instead of defining it again:

`tldraw/packages/editor/src/lib/hooks/usePeerIds.ts`, lines 10 to 16 (at 8edd404)
```ts
}

// Build the new Set (only when needed)
const nextValue = new Set<TLShapeId>()
```
This looks to me like we are only optimizing for the case where we have no changes?
- It speeds up the use case where not much work is needed (no shape visibility changes).
- It makes it slower for cases when work is needed (visibility changes) as we now have to allocate and have a more complex loop over more shapes.
In simple pans (which show / hide additional shapes) we will always do an allocation as well as another loop after this change. So it feels like we are making things worse for the case where we already have to do more work.
Nice, I was also thinking about something like this 👍
```ts
count++
if (!hasDiff && !prevValue.has(shape.id)) {
	hasDiff = true
```
We can break from the loop here. We detected a diff, we don't need to do the full count in this case as we already know we'll have to allocate.
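The suggested early exit, as a standalone sketch (hypothetical helper name; the real code inlines this in the derivation loop):

```typescript
// Returns true as soon as any currently hidden id is absent from the
// previous result: once that happens we know we must allocate a new Set,
// so there's no point finishing the count.
function hasNewHidden(hiddenIds: string[], prevValue: ReadonlySet<string>): boolean {
	for (const id of hiddenIds) {
		if (!prevValue.has(id)) return true // diff found, stop scanning
	}
	return false
}
```

A shape that left the hidden set (present in `prevValue` but no longer hidden) is still caught by the size comparison, so the early break loses no correctness.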
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
| --- | --- | --- | --- |
| ❌ Deployment failed (view logs) | image-pipeline-template | 8f1ddca | Feb 11 2026, 11:24 AM |
Cursor Bugbot has reviewed your changes and found 3 potential issues.
```ts
	return null
}

/**
```
ShapesWithSVGs crashes due to missing ShapeCullingProvider
Medium Severity
ShapesWithSVGs renders Shape components without wrapping them in a ShapeCullingProvider. Since Shape now unconditionally calls useShapeCulling(), which throws if the context is missing, enabling the debugSvg debug flag will cause a hard crash with "useShapeCulling must be used within ShapeCullingProvider". ShapesToDisplay wraps shapes in the provider, but ShapesWithSVGs doesn't.
Additional Locations (1)
```ts
		if (!b.has(item)) return false
	}
	return true
}
```
Duplicate setsEqual utility across two files
Low Severity
The setsEqual function is identically defined in both CanvasShapeIndicators.tsx and usePeerIds.ts. This duplication increases maintenance burden — a bug fix in one copy could easily be missed in the other. It would be better extracted into a shared utility.
Additional Locations (1)
```diff
 		return hashNumber(v)
 	case 'string':
-		return v.length > STRING_HASH_CACHE_MIN_STRLEN ? cachedHashString(v) : hashString(v)
+		return cachedHashString(v)
```
Prototype property collision in string hash cache lookup
Low Severity
Routing all strings through cachedHashString now exposes stringHashCache to Object.prototype property collisions for short strings. Since stringHashCache is {} (not Object.create(null)), lookups like stringHashCache["toString"] return the inherited function instead of undefined, bypassing the === undefined check and returning a non-numeric hash. The old length guard (v.length > 16) coincidentally protected all common prototype property names. The sibling symbolMap already uses Object.create(null) to avoid this exact issue.
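A quick demonstration of the collision, and the `Object.create(null)` fix the finding alludes to (cache names here are illustrative, not the actual code):

```typescript
// A plain {} inherits from Object.prototype, so looking up "toString"
// returns the inherited function instead of undefined, defeating an
// `=== undefined` miss check. A null-prototype object has no such keys.
const unsafeCache: { [key: string]: number | undefined } = {}
const safeCache: { [key: string]: number | undefined } = Object.create(null)

function isCacheMiss(cache: { [key: string]: number | undefined }, key: string): boolean {
	return cache[key] === undefined
}
```

With `unsafeCache`, `isCacheMiss(unsafeCache, 'toString')` is false even though nothing was ever stored, which is exactly the bug described above.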


In order to improve rendering performance in large canvases and multiplayer rooms, this PR bundles three related performance optimizations that reduce memory allocations and computation overhead in hot paths.
Closes #7833
Dragging a single shape on a canvas with ~2.5K shapes:

Before:

*(flame graph screenshot)*

After:

*(flame graph screenshot)*
## Changes (in order of gains)

### Replace lodash `isEqual` with optimized Set comparisons

lodash's `isEqual` doesn't handle Sets efficiently - it likely converts them to arrays for deep comparison. With thousands of shapes in large multiplayer rooms, this caused significant overhead during the equality checks in computed signals. The new comparison functions iterate Sets directly with early exit on size mismatch.

Files: `CanvasShapeIndicators.tsx`, `usePeerIds.ts`

### Avoid unnecessary Set allocations in `notVisibleShapes`

The `notVisibleShapes` derivation was creating a new Set on every recompute even when the result hadn't changed. This PR adds a fast path for the all-visible case and reuses the previous Set when the contents are unchanged. This reduces GC pressure during panning and zooming.

File: `notVisibleShapes.ts`

### Simplify string hash caching in `ImmutableMap`

Shape IDs (~14 characters) were bypassing the string hash cache because of a 16-character minimum threshold. This caused redundant hash computations on every `store.get(shapeId)` call. The fix caches all strings regardless of length.

File: `ImmutableMap.ts`

### Change type

- [x] `improvement`

### Test plan

### Release notes
> [!NOTE]
> **Medium Risk**
> Touches core rendering/culling and reactive equality logic; while behavior should be equivalent, mistakes could cause shapes/indicators to not update or to flicker under certain viewport/collaboration states.
>
> **Overview**
> Improves large-canvas rendering performance by centralizing shape culling: shapes now register their DOM containers into a new `ShapeCullingProvider`, and `DefaultCanvas` runs a single `CullingController` reactor to toggle `display` for all culled shapes instead of each `Shape` owning its own culling subscription.
>
> Reduces reactive overhead in multiplayer/indicator code by replacing deep equality (`isEqual`) with purpose-built Set/array comparators for computed render data (`CanvasShapeIndicators`) and active peer IDs (`useActivePeerIds$`), and reduces allocations in `notVisibleShapes` via fast paths and returning the previous Set when unchanged.
>
> Cuts store hashing work by changing `ImmutableMap` to always cache string hashes (with a larger bounded cache), avoiding repeated hash computations for short string keys (e.g. shape IDs).
>
> <sup>Written by Cursor Bugbot for commit 8f1ddca.</sup>