web UI: fix context notice using accumulated inputTokens instead of p…#51721
Merged
Conversation
…rompt snapshot

The context-usage banner in the web UI fell back to inputTokens when totalTokens was missing. inputTokens is accumulated across all API calls in a run (tool-use loops, compaction retries), so it overstates actual context-window utilization, e.g. showing "100% context used 757.3k / 200k" when the real prompt snapshot is only 46k/200k (23%). Drop the inputTokens fallback so the banner only fires when a genuine prompt snapshot (totalTokens) is available.

Made-with: Cursor
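The change can be sketched as a before/after pair. This is a minimal illustration of the fix described above, not the actual `chat.ts` code: the `SessionUsage` shape, the `contextNotice*` names, and the 85% threshold and 200k window are assumptions taken from the PR text.

```typescript
// Hypothetical shape of the session usage data referenced in the PR text.
interface SessionUsage {
  totalTokens?: number; // prompt snapshot from the most recent API call
  inputTokens?: number; // accumulated across ALL calls in a run (tool loops, retries)
}

const CONTEXT_WINDOW = 200_000;
const WARN_THRESHOLD = 0.85; // banner fires at >=85% utilization (per the PR notes)

// Before: fell back to accumulated inputTokens, overstating utilization.
function contextNoticeBefore(s: SessionUsage): string | null {
  const used = s.totalTokens ?? s.inputTokens; // buggy fallback
  if (used === undefined) return null;
  const pct = Math.min(used / CONTEXT_WINDOW, 1); // UI clamps at 100%
  return pct >= WARN_THRESHOLD ? `${Math.round(pct * 100)}% context used` : null;
}

// After: only a genuine prompt snapshot (totalTokens) can trigger the banner.
function contextNoticeAfter(s: SessionUsage): string | null {
  const used = s.totalTokens;
  if (used === undefined) return null;
  const pct = Math.min(used / CONTEXT_WINDOW, 1);
  return pct >= WARN_THRESHOLD ? `${Math.round(pct * 100)}% context used` : null;
}
```

With the failure mode from the PR (totalTokens missing, inputTokens at 757.3k), the old code reports a full context while the new code stays silent.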
Contributor
Greptile Summary: This PR fixes a false-positive "context used" warning in the web UI chat view. The one-line change in …
Confidence Score: 5/5
Last reviewed commit: "web UI: fix context ..."
MaheshBhushan pushed a commit to MaheshBhushan/openclaw that referenced this pull request on Mar 21, 2026:
…rompt snapshot (openclaw#51721)
mrosmarin added a commit to mrosmarin/openclaw that referenced this pull request on Mar 21, 2026:

* main: (516 commits)
  - fix: use content hash for memory flush dedup instead of compactionCount (openclaw#30115) (openclaw#34222)
  - fix(tts): add matrix to VOICE_BUBBLE_CHANNELS (openclaw#37080)
  - feat(memory): pluggable system prompt section for memory plugins (openclaw#40126)
  - fix: detect nvm services from installed command (openclaw#51146)
  - fix: handle Linux nvm CA env before startup (openclaw#51146) (thanks @GodsBoy)
  - refactor: route Telegram runtime through plugin sdk (openclaw#51772)
  - refactor: route iMessage runtime through plugin sdk (openclaw#51770)
  - refactor: route Slack runtime through plugin sdk (openclaw#51766)
  - refactor(doctor): extract provider and shared config helpers (openclaw#51753)
  - Fix Discord `/codex_resume` picker expiration (openclaw#51260)
  - fix(ci): remove duplicate embedding default export
  - fix(ci): restore embedding defaults and plugin boundaries
  - fix: compaction safeguard summary budget (openclaw#27727)
  - web UI: fix context notice using accumulated inputTokens instead of prompt snapshot (openclaw#51721)
  - fix(status): skip cold-start status probes
  - refactor(doctor): extract telegram provider warnings (openclaw#51704)
  - fix(telegram): default fresh setups to mention-gated groups
  - docs(changelog): note telegram doctor first-run guidance
  - fix(doctor): add telegram first-run guidance
  - fix(doctor): suppress telegram fresh-install group warning
  - ...
JohnJAS pushed a commit to JohnJAS/openclaw that referenced this pull request on Mar 22, 2026:
…rompt snapshot (openclaw#51721)
pholpaphankorn pushed a commit to pholpaphankorn/openclaw that referenced this pull request on Mar 22, 2026:
…rompt snapshot (openclaw#51721)
frankekn pushed a commit to artwalker/openclaw that referenced this pull request on Mar 23, 2026:
…rompt snapshot (openclaw#51721)
furaul pushed a commit to furaul/openclaw that referenced this pull request on Mar 24, 2026:
…rompt snapshot (openclaw#51721)
alexey-pelykh pushed a commit to remoteclaw/remoteclaw that referenced this pull request on Mar 25, 2026:
…rompt snapshot (openclaw#51721) (cherry picked from commit 7c520cc)
alexey-pelykh pushed a commit to remoteclaw/remoteclaw that referenced this pull request on Mar 25, 2026:
…rompt snapshot (openclaw#51721) (cherry picked from commit 7c520cc)
web UI: fix context notice using accumulated inputTokens instead of prompt snapshot
Summary
- The banner fell back to `inputTokens` (accumulated across all API calls in a run) when `totalTokens` (prompt snapshot) was missing, showing false alarms like "100% context used 757.3k / 200k".
- Dropped the `inputTokens` fallback in `renderContextNotice` so the banner only fires when a genuine `totalTokens` prompt snapshot is available. Added a test covering the missing-`totalTokens` edge case.
- The per-message context line (`Context: 46k/200k (23%)`) was already correct and is untouched. Token accumulation, session storage, and per-message context % in grouped-render are unchanged.

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
User-visible / Behavior Changes
- The context notice no longer appears when `totalTokens` is unavailable, instead of falsely reporting accumulated `inputTokens` as context utilization.
- When `totalTokens` is present and genuinely high (>=85% of the context window), the banner continues to display correctly.

Security Impact (required)
No / No / No / No / No

Repro + Verification
Environment
Steps
Run a session with tool-use loops until `inputTokens` accumulates well past 200k (e.g. 757k).

Expected
Actual (before fix)
Evidence
Human Verification (required)
Manually verified three cases: `totalTokens` undefined with high `inputTokens`; `totalTokens` present and above the 85% threshold; `totalTokens` present but below the threshold. A `src/tts/tts.ts` format failure on `main` is unrelated to this change.

Review Conversations
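The three verified cases can be expressed as a small test sketch. This uses a simplified stand-in for the fixed guard (the name `shouldShowContextNotice`, the 85% threshold, and the 200k window are assumptions from the PR text, not the repo's actual test file):

```typescript
// Simplified version of the fixed guard: only a genuine prompt snapshot
// (totalTokens) may trigger the banner; accumulated inputTokens is ignored.
function shouldShowContextNotice(
  totalTokens: number | undefined,
  windowSize: number = 200_000,
): boolean {
  return totalTokens !== undefined && totalTokens / windowSize >= 0.85;
}

// Case 1: totalTokens undefined (even with huge accumulated inputTokens) -> no banner.
// Case 2: totalTokens present and above the 85% threshold -> banner.
// Case 3: totalTokens present but below the threshold -> no banner.
```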
Compatibility / Migration
Yes / No / No

Failure Recovery (if this breaks)
Revert `ui/src/ui/views/chat.ts` line 258 to restore `?? session?.inputTokens`. Affected file: `ui/src/ui/views/chat.ts`. Failure would require `totalTokens` to be permanently missing for active sessions, which is unlikely given session-usage persistence.
Risk: if `totalTokens` is frequently `undefined` for active sessions, users lose the context-full warning entirely. Mitigation: `totalTokens` is populated by `deriveSessionTotalTokens` on every reply that includes `lastCallUsage` or `promptTokens`; it is only missing on cold/stale sessions where context utilization is unknown anyway, and showing nothing is more correct than showing a false alarm.
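For context, here is a sketch of how a `deriveSessionTotalTokens`-style helper might behave per the mitigation above. The field names `lastCallUsage` and `promptTokens` come from the PR text; the exact shapes and signature are assumptions, not the repo's implementation:

```typescript
interface UsageRecord {
  promptTokens?: number;     // prompt-side tokens of the latest API call
  completionTokens?: number; // completion-side tokens of the latest API call
}

interface SessionLike {
  lastCallUsage?: UsageRecord;
  promptTokens?: number; // persisted snapshot from session-usage storage
}

// Prefer the most recent call's usage; fall back to the persisted snapshot.
// Return undefined (never 0) when nothing is known, so the banner code can
// distinguish "unknown utilization" from "empty context" and stay silent.
function deriveSessionTotalTokens(s: SessionLike): number | undefined {
  const last = s.lastCallUsage;
  if (last?.promptTokens !== undefined) {
    return last.promptTokens + (last.completionTokens ?? 0);
  }
  return s.promptTokens;
}
```

The key design point is the `undefined` return on cold/stale sessions, which is exactly the case where the fixed banner now stays quiet.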