perf(editor): reduce allocations and hash computation overhead#7840

Merged

steveruizok merged 7 commits into main from performance-stack-1 on Feb 11, 2026

Conversation


@steveruizok steveruizok commented Feb 4, 2026

In order to improve rendering performance in large canvases and multiplayer rooms, this PR bundles three related performance optimizations that reduce memory allocations and computation overhead in hot paths.

Closes #7833

Dragging a single shape on a canvas with ~2.5K shapes:

Before: (screenshot)

After: (screenshot)

Changes (in order of gains)

Replace lodash isEqual with optimized Set comparisons

lodash.isEqual doesn't handle Sets efficiently - it likely converts them to arrays for deep comparison. With thousands of shapes in large multiplayer rooms, this caused significant overhead during the equality checks in computed signals. The new comparison functions iterate Sets directly with early-exit on size mismatch.

Files: CanvasShapeIndicators.tsx, usePeerIds.ts
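The comparator itself is small; here is a sketch matching the version quoted later in the review thread, iterating one Set directly with an early exit on size mismatch:

```typescript
// Early-exit Set comparator (as quoted in the review comments below).
function setsEqual<T>(a: Set<T>, b: Set<T>): boolean {
	if (a.size !== b.size) return false // early exit: different sizes can't be equal
	for (const item of a) {
		if (!b.has(item)) return false // membership check is O(1) per item
	}
	return true
}
```

Because equal sizes are checked first, a single pass over `a` with `b.has` is sufficient; there is no need to iterate `b` as well.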

Avoid unnecessary Set allocations in notVisibleShapes

The notVisibleShapes derivation was creating a new Set on every recompute even when the result hadn't changed. This PR adds:

  • A fast path returning a cached empty set when all shapes are visible
  • Change detection before allocating a new Set
  • Return of the previous value when contents are unchanged

This reduces GC pressure during panning and zooming.

File: notVisibleShapes.ts
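The allocation-avoiding pattern can be sketched roughly as follows. This is illustrative only: the function name, signature, and `string` ids are made up and not the actual `notVisibleShapes` source.

```typescript
// Cached singleton for the common "everything visible" case.
const EMPTY_SET: ReadonlySet<string> = new Set()

function deriveNotVisible(
	allIds: string[],
	isVisible: (id: string) => boolean,
	prevValue: ReadonlySet<string> | undefined
): ReadonlySet<string> {
	// First pass: count hidden ids and detect any difference from the previous result.
	let count = 0
	let hasDiff = prevValue === undefined
	for (const id of allIds) {
		if (isVisible(id)) continue
		count++
		if (!hasDiff && !prevValue!.has(id)) hasDiff = true
	}
	if (count === 0) return EMPTY_SET // fast path: all shapes visible
	if (!hasDiff && prevValue !== undefined && prevValue.size === count) {
		return prevValue // contents unchanged: reuse the previous Set
	}
	// Only now allocate a new Set.
	const next = new Set<string>()
	for (const id of allIds) {
		if (!isVisible(id)) next.add(id)
	}
	return next
}
```

Returning the previous Set by identity matters for reactive systems: a downstream `computed` that compares by reference will not re-fire when the same object comes back.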

Simplify string hash caching in ImmutableMap

Shape IDs (~14 characters) were bypassing the string hash cache because of a 16-character minimum threshold. This caused redundant hash computations on every store.get(shapeId) call. The fix:

  • Caches all strings regardless of length
  • Removes cache eviction logic (strings are bounded by document size)

File: ImmutableMap.ts
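A minimal sketch of the always-cache approach. `hashString` here is a stand-in for tldraw's actual hash function, and the `Object.create(null)` backing store incorporates the prototype-collision fix suggested in review; the PR as merged used a plain object literal.

```typescript
// Stand-in hash function (31-based rolling hash), not tldraw's real one.
function hashString(s: string): number {
	let h = 0
	for (let i = 0; i < s.length; i++) {
		h = (Math.imul(31, h) + s.charCodeAt(i)) | 0
	}
	return h
}

// Object.create(null) avoids inherited Object.prototype keys like "toString".
const stringHashCache: Record<string, number> = Object.create(null)

function cachedHashString(s: string): number {
	let hashed = stringHashCache[s]
	if (hashed === undefined) {
		hashed = hashString(s) // compute once, then serve from cache
		stringHashCache[s] = hashed
	}
	return hashed
}
```

With no length threshold, a 14-character shape ID hits the cache on every `store.get(shapeId)` after the first lookup.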

Change type

  • improvement

Test plan

  1. Open a canvas with many shapes (100+)
  2. Pan and zoom around - should feel smoother
  3. In a multiplayer room with multiple users, verify no regressions
  • Unit tests
  • End to end tests

Release notes

  • Improve performance in large canvases and multiplayer rooms by optimizing Set comparisons, reducing memory allocations, and caching string hashes

Note

Medium Risk
Touches core rendering/culling and reactive equality logic; while behavior should be equivalent, mistakes could cause shapes/indicators to not update or to flicker under certain viewport/collaboration states.

Overview
Improves large-canvas rendering performance by centralizing shape culling: shapes now register their DOM containers into a new ShapeCullingProvider, and DefaultCanvas runs a single CullingController reactor to toggle display for all culled shapes instead of each Shape owning its own culling subscription.

Reduces reactive overhead in multiplayer/indicator code by replacing deep equality (isEqual) with purpose-built Set/array comparators for computed render data (CanvasShapeIndicators) and active peer IDs (useActivePeerIds$), and reduces allocations in notVisibleShapes via fast paths and returning the previous Set when unchanged.

Cuts store hashing work by changing ImmutableMap to always cache string hashes (with a larger bounded cache), avoiding repeated hash computations for short string keys (e.g. shape IDs).

Written by Cursor Bugbot for commit 8f1ddca. This will update automatically on new commits. Configure here.

steveruizok and others added 3 commits February 4, 2026 17:51
Closes #7833. Previously, once
the number of shapes grew to 256, we were creating new hashes for all
shapes on every frame.

## Summary

- Cache all strings regardless of length (removed 16-char minimum
threshold)
- Remove cache eviction logic (no more 255 entry limit and reset)
- Simplify to unbounded cache since shape IDs are finite per document

The previous implementation avoided caching short strings and had an
eviction policy. However, in tldraw's use case, the number of unique
string keys is bounded by the document's shape/record count, so an
unbounded cache is safe and reduces overhead.

## Test plan
- [x] Typecheck passes
- [ ] Manual testing with large documents

🤖 Generated with [Claude Code](https://claude.ai/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Changes the hashing hot-path used by `ImmutableMap` to always cache
string hashes and removes cache eviction, which could increase memory
use or alter performance characteristics on workloads with many unique
strings.
> 
> **Overview**
> **Simplifies string hash caching in `ImmutableMap`** by always routing
string hashing through `cachedHashString` (removing the short-string
threshold).
> 
> Removes the cache size/eviction logic and related constants, making
`stringHashCache` an unbounded in-memory map that retains all hashed
strings for the process lifetime.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e3fefe0. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Claude Opus 4.5 <[email protected]>
…7837)

## Summary
- Add fast path when all shapes are visible (return cached empty set)
- On first run, compute from scratch
- On subsequent runs, check if result differs before creating new Set
- Only allocate new Set when contents actually changed

This reduces GC pressure by avoiding unnecessary Set allocations on
every frame.

## Test plan
- [x] Typecheck passes
- [ ] Manual testing with many shapes on canvas

🤖 Generated with [Claude Code](https://claude.ai/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Low Risk**
> Low risk performance refactor limited to `notVisibleShapes`
derivation; main risk is subtle cache/identity behavior affecting
downstream consumers if the computed value no longer updates when
expected.
> 
> **Overview**
> Optimizes `notVisibleShapes` to reduce GC pressure by **reusing Set
instances** instead of allocating a new `Set` every recompute.
> 
> Adds a fast path that returns a cached empty `Set` when all shapes are
visible, computes from scratch on the first run, and on subsequent runs
scans to detect changes before building a new `Set` (returning
`prevValue` when contents are unchanged).
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e74207f. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Claude Opus 4.5 <[email protected]>
…7839)

In order to fix a major performance bottleneck in large multiplayer
rooms, this PR replaces expensive lodash deep equality checks with
custom equality functions optimized for Sets.

lodash's `isEqual` doesn't handle Sets efficiently - it likely converts
them to arrays for deep comparison. With thousands of shapes in large
multiplayer rooms, this was causing significant performance overhead on
every frame during the equality checks in computed signals.

### Change type

- [x] `improvement`

### Test plan

1. Open a large multiplayer room with many shapes
2. Check flame graph - `isEqual` should no longer appear in hot paths

- [ ] Unit tests
- [ ] End to end tests

### Release notes

- Improve performance in large multiplayer rooms by replacing lodash
deep equality with optimized Set comparisons

Co-authored-by: Claude Opus 4.5 <[email protected]>
@huppy-bot huppy-bot bot added the improvement Product improvement label Feb 4, 2026
@vercel

vercel bot commented Feb 4, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
examples Ready Ready Preview Feb 11, 2026 11:27am
5 Skipped Deployments
Project Deployment Actions Updated (UTC)
analytics Ignored Ignored Preview Feb 11, 2026 11:27am
chat-template Ignored Ignored Preview Feb 11, 2026 11:27am
tldraw-docs Ignored Ignored Preview Feb 11, 2026 11:27am
tldraw-shader Ignored Ignored Preview Feb 11, 2026 11:27am
workflow-template Ignored Ignored Preview Feb 11, 2026 11:27am


```ts
if (!b.has(item)) return false
}
return true
}
```

Duplicated setsEqual function across two files

Medium Severity

The setsEqual function is defined identically in both usePeerIds.ts and CanvasShapeIndicators.tsx. This duplication increases maintenance burden—bug fixes or optimizations must be applied in both places. Consider extracting this to a shared utility module.


## Summary

- Creates a `ShapeCullingProvider` React context that maintains a
registry of shape container refs
- Replaces per-shape `useQuickReactor` subscriptions with a single
centralized reactor in `ShapesToDisplay`
- Each Shape component registers its container refs on mount via the
context and unregisters on unmount
- The centralized `CullingController` uses the context to update
visibility of all registered containers

This keeps DOM element tracking in React-land rather than on the Editor,
maintaining separation between the editor and rendering layers.

## Performance improvement

With N shapes on canvas, this changes:
- **Before**: N separate `EffectScheduler` instances, each subscribing
to `getCulledShapes()` and running when it changes
- **After**: 1 reactor that iterates only the registered containers,
updating only those whose state changed

This reduces subscription overhead from O(N) to O(1).
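The registry-plus-single-reactor idea can be sketched framework-free. Names here (`CullingRegistry`, `Container`) are illustrative; the actual PR implements this with a React context (`ShapeCullingProvider`) and tldraw's reactor machinery.

```typescript
// Minimal shape of a DOM element for this sketch.
type Container = { style: { display: string } }

class CullingRegistry {
	private containers = new Map<string, Container>()

	// Each shape registers its container on mount...
	register(id: string, el: Container) {
		this.containers.set(id, el)
	}
	// ...and unregisters on unmount.
	unregister(id: string) {
		this.containers.delete(id)
	}

	// One reactor pass over all registered containers, instead of
	// N per-shape subscriptions. Only touches elements whose state changed.
	applyCulling(culled: ReadonlySet<string>) {
		for (const [id, el] of this.containers) {
			const display = culled.has(id) ? 'none' : 'block'
			if (el.style.display !== display) el.style.display = display
		}
	}
}
```

The single reactor subscribes to `getCulledShapes()` once; the per-shape cost is reduced to a Map entry and a cheap `Set.has` check per pass.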

## Test plan

1. Run `yarn dev` and open the examples app
2. Create many shapes on the canvas
3. Pan around the canvas - shapes outside the viewport should still be
culled (hidden) correctly
4. Shapes inside the viewport should remain visible
5. Verify no visual regressions when selecting, editing, or interacting
with shapes

Closes #7831

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Changes the shape rendering pipeline by moving culling visibility
updates to a shared registry and single reactor, which could cause
shapes to incorrectly hide/show if registration or updates misfire. Also
adjusts store string hashing cache behavior, which may affect
memory/perf characteristics under load.
> 
> **Overview**
> **Centralizes shape culling display toggling.** Shapes now register
their DOM containers via a new `ShapeCullingProvider`/`useShapeCulling`
context, and `DefaultCanvas` runs a single `CullingController` reactor
to apply `display: none/block` updates for all registered shapes
(replacing per-shape culling reactors in `Shape.tsx`).
> 
> **Reduces reactive churn in a few hot paths.** `notVisibleShapes` is
rewritten to avoid unnecessary `Set` allocations when results don’t
change, `CanvasShapeIndicators` replaces generic deep equality with a
purpose-built comparator for its computed render data, and
`useActivePeerIds$` switches to a `Set` equality comparator.
> 
> **Tweaks store hashing behavior.** `ImmutableMap` now always uses
`cachedHashString` for strings and removes cache size/length limits,
changing caching/memory tradeoffs for string hashing.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
1881c57. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
```ts
for (let i = 0; i < allShapes.length; i++) {
	shape = allShapes[i]
	if (visibleIds.has(shape.id)) continue
	if (!editor.getShapeUtil(shape.type).canCull(shape)) continue
```

i think doing this canCull check twice might be a bit much.
you could still collect the IDs above in the first loop, but in an array instead?

```ts
if (!b.has(item)) return false
}
return true
}
```

agreed with Cursor, would be great to have a util file for this sort of thing

```diff
-const STRING_HASH_CACHE_MAX_SIZE = 255
-let STRING_HASH_CACHE_SIZE = 0
-let stringHashCache: Record<string, number> = {}
+const stringHashCache: Record<string, number> = {}
```

i don't think we should make this unbounded. either use an LRU or have a max that's closer to our max shape count
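One way to add the suggested bound is an LRU built on `Map`'s insertion-order iteration. This is an illustrative sketch of the reviewer's suggestion, not code from the PR:

```typescript
// Simple LRU cache: Map preserves insertion order, so the first key
// is always the least recently used one.
class LRUCache<K, V> {
	private map = new Map<K, V>()
	constructor(private maxSize: number) {}

	get(key: K): V | undefined {
		const value = this.map.get(key)
		if (value !== undefined) {
			// Re-insert to mark this entry as most recently used.
			this.map.delete(key)
			this.map.set(key, value)
		}
		return value
	}

	set(key: K, value: V) {
		if (this.map.has(key)) {
			this.map.delete(key)
		} else if (this.map.size >= this.maxSize) {
			// Evict the least recently used entry (first inserted key).
			this.map.delete(this.map.keys().next().value as K)
		}
		this.map.set(key, value)
	}
}
```

Sized near the maximum expected shape count, this keeps hot shape-ID hashes cached while bounding memory across pages and documents.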

```ts
let hashed = stringHashCache[string]
if (hashed === undefined) {
	hashed = hashString(string)
	if (STRING_HASH_CACHE_SIZE === STRING_HASH_CACHE_MAX_SIZE) {
```

This could lead to memory issues. stringHashCache would never get cleaned up, would take in ids from shapes from different pages, even different documents. Let's at least set an upper bound, could even be some high number.


also, in a SPA, i'm pretty sure this would presist across pages/boards, no? 😬


Yeah, it definitely would. It's a global, so only a refresh would reset it.

```ts
	collaboratorIndicators: CollaboratorIndicatorData[]
}

function setsEqual<T>(a: Set<T>, b: Set<T>): boolean {
```

Same function here, let's reuse it instead of defining it again:

```ts
function setsEqual<T>(a: Set<T>, b: Set<T>): boolean {
	if (a.size !== b.size) return false
	for (const item of a) {
		if (!b.has(item)) return false
	}
	return true
}
```

```ts
}

// Build the new Set (only when needed)
const nextValue = new Set<TLShapeId>()
```

This looks to me like we are only optimizing for the case where we have no changes?

  1. It speeds up the use case where not much work is needed (no shape visibility changes).
  2. It makes it slower for cases when work is needed (visibility changes) as we now have to allocate and have a more complex loop over more shapes.

In simple pans (which show / hide additional shapes) we will always do an allocation as well as another loop after this change. So it feels like we are making things worse for the case where we already have to do more work.


Nice, I was also thinking about something like this 👍


```ts
count++
if (!hasDiff && !prevValue.has(shape.id)) {
	hasDiff = true
```
Contributor

@MitjaBezensek MitjaBezensek Feb 5, 2026


We can break from the loop here. We detected a diff, we don't need to do the full count in this case as we already know we'll have to allocate.

@steveruizok steveruizok added performance Improve performance of an existing feature sdk Affects the tldraw sdk labels Feb 6, 2026
@cloudflare-workers-and-pages

cloudflare-workers-and-pages bot commented Feb 11, 2026

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

❌ Deployment failed: image-pipeline-template, commit 8f1ddca, Feb 11 2026, 11:24 AM (UTC)


@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 3 potential issues.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

```ts
	return null
}

/**
```

ShapesWithSVGs crashes due to missing ShapeCullingProvider

Medium Severity

ShapesWithSVGs renders Shape components without wrapping them in a ShapeCullingProvider. Since Shape now unconditionally calls useShapeCulling(), which throws if the context is missing, enabling the debugSvg debug flag will cause a hard crash with "useShapeCulling must be used within ShapeCullingProvider". ShapesToDisplay wraps shapes in the provider, but ShapesWithSVGs doesn't.


```ts
if (!b.has(item)) return false
}
return true
}
```

Duplicate setsEqual utility across two files

Low Severity

The setsEqual function is identically defined in both CanvasShapeIndicators.tsx and usePeerIds.ts. This duplication increases maintenance burden — a bug fix in one copy could easily be missed in the other. It would be better extracted into a shared utility.


```diff
 	return hashNumber(v)
 case 'string':
-	return v.length > STRING_HASH_CACHE_MIN_STRLEN ? cachedHashString(v) : hashString(v)
+	return cachedHashString(v)
```

Prototype property collision in string hash cache lookup

Low Severity

Routing all strings through cachedHashString now exposes stringHashCache to Object.prototype property collisions for short strings. Since stringHashCache is {} (not Object.create(null)), lookups like stringHashCache["toString"] return the inherited function instead of undefined, bypassing the === undefined check and returning a non-numeric hash. The old length guard (v.length > 16) coincidentally protected all common prototype property names. The sibling symbolMap already uses Object.create(null) to avoid this exact issue.
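The collision described above is easy to demonstrate, along with the `Object.create(null)` fix the report points to:

```typescript
// A plain object literal inherits from Object.prototype, so a lookup for
// "toString" finds the inherited method rather than undefined.
const unsafeCache: Record<string, number> = {}
const collided = (unsafeCache as any)['toString'] !== undefined // true: prototype leak

// A prototype-less object has no inherited keys, so the
// `=== undefined` cache-miss check behaves correctly for every string.
const safeCache: Record<string, number> = Object.create(null)
const safe = (safeCache as any)['toString'] === undefined // true: genuine miss
```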


@steveruizok steveruizok added this pull request to the merge queue Feb 11, 2026
Merged via the queue into main with commit 69d5405 Feb 11, 2026
19 of 20 checks passed
@steveruizok steveruizok deleted the performance-stack-1 branch February 11, 2026 11:52

Labels

improvement Product improvement performance Improve performance of an existing feature sdk Affects the tldraw sdk


Development

Successfully merging this pull request may close these issues.

ImmutableMap hashes short string keys on every lookup

3 participants