perf: optimize search and reduce unnecessary re-renders #99
Conversation
…onents

- Default to light metadata for session listing (skip full JSONL parsing)
- Skip noise filtering when using light metadata (stat-only session listing)
- Merge hasDisplayableContent into analyzeSessionFileMetadata as a single pass
- Add stale-while-revalidate session cache for instant project switching
- Wrap 6 key components in React.memo to prevent cascade re-renders (LinkedToolItem, ThinkingItem, TextItem, DisplayItemList, ExecutionTrace, SessionItem)

Co-Authored-By: Claude Opus 4.6 <[email protected]>
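The stale-while-revalidate session cache mentioned above can be sketched like this (a minimal illustration; `SessionCache` and its TTL staleness flag are assumptions, not the PR's actual store code). The point of SWR is that stale entries are still returned, so project switching renders instantly while a background refresh runs.

```typescript
interface CacheEntry<T> {
  value: T;
  storedAt: number;
}

class SessionCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs: number) {}

  put(projectId: string, value: T, now = Date.now()): void {
    this.entries.set(projectId, { value, storedAt: now });
  }

  // Stale entries are still returned (that is the point of SWR): the caller
  // shows them immediately and revalidates in the background.
  get(
    projectId: string,
    now = Date.now(),
  ): { value: T; stale: boolean } | undefined {
    const entry = this.entries.get(projectId);
    if (!entry) return undefined;
    return { value: entry.value, stale: now - entry.storedAt > this.ttlMs };
  }
}
```

A caller would render `get(...)` immediately when it hits, and kick off the real fetch regardless, replacing the list when fresh data arrives.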
… results

Search was using remark markdown parsing (AST parse → HAST → tree walk) for every message on every query, causing multi-second freezes. Replaced with plain string.indexOf — the same approach VSCode uses.

Key changes:
- Replace findMarkdownSearchMatches with plain indexOf in both the renderer store and the main-process SessionSearcher
- Add item-scoped store selectors so only items WITH matches re-render
- Add 300ms debounce on in-session search, 400ms on global search
- Cache the project list for 30s to avoid re-scanning disk on every query
- Cap in-session matches at 500 to limit DOM elements
- Increase search caches from 200 to 1000 entries
- Increase search concurrency (16 sessions, 8 projects in parallel)

Co-Authored-By: Claude Opus 4.6 <[email protected]>
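The plain-text matcher can be sketched as follows (illustrative; `findPlainMatches` and `SearchMatch` are assumed names, though the 500-match cap comes from the change list above). A case-insensitive `indexOf` scan needs no parsing at all:

```typescript
const MAX_MATCHES = 500; // cap to limit highlight DOM elements

interface SearchMatch {
  start: number;
  end: number;
}

function findPlainMatches(text: string, query: string): SearchMatch[] {
  const matches: SearchMatch[] = [];
  if (!query) return matches; // empty needle would match at every offset
  const haystack = text.toLowerCase();
  const needle = query.toLowerCase();
  let from = 0;
  while (matches.length < MAX_MATCHES) {
    const i = haystack.indexOf(needle, from);
    if (i === -1) break;
    matches.push({ start: i, end: i + needle.length });
    from = i + needle.length; // continue past this match
  }
  return matches;
}
```

Each scan is O(text length) with no allocation beyond the match list, which is why it stays fast on every keystroke where an AST parse did not.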
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the performance and responsiveness of the application's search functionality. By moving away from computationally intensive markdown parsing to a simpler string search, and by limiting UI re-renders to only the affected components, the user experience for both in-session and global search is dramatically improved. Additionally, optimizations in data loading and caching reduce disk I/O and boost search concurrency, leading to near-instantaneous results even in large sessions.
Code Review
This is an excellent pull request that delivers significant performance improvements across the entire application. The optimizations are well-thought-out, addressing bottlenecks in search, data fetching, and UI rendering. Replacing expensive markdown parsing with indexOf for search, implementing item-scoped store selectors to prevent unnecessary re-renders, and adding caching at various levels (project list, session list) are all high-impact changes. The code is clean, and the updates to tests to reflect the new behavior are appreciated. I have a couple of minor suggestions to improve consistency in search query handling.
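The item-scoped selector idea the review praises can be illustrated with a minimal subscription store (a sketch, not the app's actual store; `createStore` and the state shape are assumptions). Each item subscribes only to its own slice, so a query change notifies only items whose match list actually changed:

```typescript
type Listener = () => void;

function createStore<S>(initial: S) {
  let state = initial;
  const listeners = new Set<{
    fn: Listener;
    selector: (s: S) => unknown;
    last: unknown;
  }>();
  return {
    getState: () => state,
    setState(next: S) {
      state = next;
      for (const l of listeners) {
        const selected = l.selector(state);
        // Notify only when this subscriber's selected slice changed.
        if (selected !== l.last) {
          l.last = selected;
          l.fn();
        }
      }
    },
    subscribe<T>(selector: (s: S) => T, fn: Listener) {
      const entry = { fn, selector, last: selector(state) as unknown };
      listeners.add(entry);
      return () => listeners.delete(entry);
    },
  };
}
```

With reference-equality checks like this, items without matches keep their old (identical) slice and skip re-rendering entirely.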
Reviewed hunk:

```ts
debounceRef.current = setTimeout(() => {
  setSearchQuery(value, conversation);
```
To ensure consistent search behavior and avoid issues with leading/trailing whitespace, it's best to trim the query before passing it to the store. This aligns with the global search behavior in SessionSearcher.
```diff
- setSearchQuery(value, conversation);
+ setSearchQuery(value.trim(), conversation);
```
Reviewed hunk:

```ts
if (localQuery !== searchQuery) {
  setSearchQuery(localQuery, conversation);
```
Similarly to the debounced search, the query should be trimmed here to ensure the search executed on 'Enter' is consistent and handles whitespace correctly. The condition should also check the trimmed query against the current search query.
```diff
- if (localQuery !== searchQuery) {
-   setSearchQuery(localQuery, conversation);
+ if (localQuery.trim() !== searchQuery) {
+   setSearchQuery(localQuery.trim(), conversation);
+ }
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5c7f921ec3
Reviewed hunk:

```ts
// Flush any pending debounce immediately on Enter
clearTimeout(debounceRef.current);
if (localQuery !== searchQuery) {
  setSearchQuery(localQuery, conversation);
```
Cancel pending debounce before closing search
The new debounce path only clears pending timers on Enter, so if a user types and then immediately closes find (Esc or the close button), the queued setSearchQuery still fires after 300ms and re-populates the hidden search state and highlights. This regression comes from introducing delayed dispatch in handleChange without canceling it on close; closing search should clear debounceRef before calling hideSearch to prevent stale queries from being restored.
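A sketch of the suggested fix, with assumed names (`makeSearchHandlers`, `hideSearch` are illustrative, not the component's actual identifiers); the timer functions are injectable only to make the behavior easy to demonstrate:

```typescript
function makeSearchHandlers(opts: {
  setSearchQuery: (q: string) => void;
  hideSearch: () => void;
  delayMs?: number;
  schedule?: (fn: () => void, ms: number) => unknown;
  cancel?: (id: unknown) => void;
}) {
  const {
    setSearchQuery,
    hideSearch,
    delayMs = 300,
    schedule = (fn, ms) => setTimeout(fn, ms),
    cancel = (id) => clearTimeout(id as ReturnType<typeof setTimeout>),
  } = opts;
  let pending: unknown;

  return {
    handleChange(value: string) {
      if (pending !== undefined) cancel(pending);
      pending = schedule(() => setSearchQuery(value.trim()), delayMs);
    },
    handleClose() {
      // The fix: cancel the queued dispatch BEFORE hiding search,
      // so no stale query fires after the panel closes.
      if (pending !== undefined) cancel(pending);
      pending = undefined;
      hideSearch();
    },
  };
}
```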
```diff
  includeTotalCount: false,
  prefilterAll: false,
- metadataLevel: connectionMode === 'ssh' ? 'light' : 'deep',
+ metadataLevel: 'light',
```
Preserve deep metadata for local session lists
Hard-coding metadataLevel: 'light' for the main session fetch drops fields that the sidebar depends on (buildLightSessionMetadata returns messageCount: 0 and no contextConsumption), so local users lose accurate message counts and the “Sort by context consumption” mode effectively stops working. This is a behavior regression from the previous local 'deep' fetch path and should keep deep metadata (or a compatible equivalent) where those UI features are used.
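One way to express the suggested behavior (hedged sketch: `metadataLevelFor` and the `sortByContextConsumption` parameter are illustrative, not the app's actual API):

```typescript
type MetadataLevel = 'light' | 'deep';

// Deep metadata is needed wherever the UI shows message counts or sorts by
// context consumption; light metadata is only safe where those are unused.
function metadataLevelFor(
  connectionMode: 'ssh' | 'local',
  sortByContextConsumption: boolean,
): MetadataLevel {
  if (connectionMode === 'local' || sortByContextConsumption) return 'deep';
  return 'light';
}
```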
Reviewed hunk:

```ts
const cacheProjectId = get().selectedProjectId;
if (cacheProjectId) {
  get()._sessionCache.set(cacheProjectId, {
```
Key session cache writes by requested project
After fetchSessionsInitial(projectId) resolves, the cache is written under selectedProjectId instead of the original projectId argument. If the user switches projects while the request is in flight, this can store project A's results under project B's cache key; selectProject then immediately hydrates B from the corrupted cache before refreshing, showing the wrong session list. Cache writes here should use the function argument (and ideally ignore stale responses).
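A sketch of the proposed correction (store shape and names are assumptions, not the PR's actual code): key the cache write by the requested projectId, and drop responses that resolve after the selection or a newer request has superseded them.

```typescript
interface Session {
  id: string;
}

class SessionStore {
  selectedProjectId: string | undefined;
  sessions: Session[] = [];
  private cache = new Map<string, Session[]>();
  private requestSeq = 0;

  async fetchSessionsInitial(
    projectId: string,
    load: (id: string) => Promise<Session[]>,
  ): Promise<void> {
    const seq = ++this.requestSeq;
    const result = await load(projectId);
    // Key by the argument, not by whatever project is selected now.
    this.cache.set(projectId, result);
    // Ignore stale responses: a newer fetch started or the project changed.
    if (seq !== this.requestSeq || this.selectedProjectId !== projectId) return;
    this.sessions = result;
  }

  getCached(projectId: string): Session[] | undefined {
    return this.cache.get(projectId);
  }
}
```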
Caution: Review failed. Pull request was closed or merged during review.

📝 Walkthrough

This PR implements search optimization and caching enhancements across discovery, state management, and UI rendering. Changes include introducing TTL-based project-scan caching, replacing markdown-based search with plain-text matching, capping search results at 500 matches, adding per-item search tracking to reduce component re-renders, memoizing multiple chat components, implementing per-project session-state caching, and adjusting metadata levels to 'light' for SSH paths.
Summary
- `indexOf` for search — the biggest bottleneck was running full AST parsing (remark → HAST → tree walk) for every message on every keystroke. Now uses the same approach as VSCode.

Test plan
- `pnpm test` — all 652 tests pass

🤖 Generated with Claude Code