bug(tools): read_overflow rejects id with overflow: prefix — LLM copies format literally #1868
Description
Summary
When tool output overflows (overflow.threshold exceeded), the overflow notice injected into context is:
[full output stored as overflow:63a6dc8e-0afd-4711-bbfe-a70318ce9237 — 2717 bytes, use read_overflow tool to retrieve]
The LLM parses this and calls read_overflow with id: "overflow:63a6dc8e-..." (including the overflow: prefix), which fails:
[error] invalid tool parameters: id must be a valid UUID
The LLM then retries with the bare UUID and succeeds — but this wastes one LLM turn and one tool call.
Reproduction
- Config: `[tools.overflow] threshold = 1500`, `[tools] summarize_output = false`
- Run a shell command producing > 1500 chars of output (e.g., an 80-line Python print loop)
- Observe: first `read_overflow` call gets `invalid tool parameters: id must be a valid UUID`
- Observe: second call (with the bare UUID) succeeds
Debug dump: 0001-response.txt shows "id": "overflow:63a6dc8e-0afd-4711-bbfe-a70318ce9237"
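The overflowing command from the repro can be sketched as follows (the exact text printed is an assumption; any output over 1500 bytes trips the threshold):

```python
# Repro helper: emit ~3200 bytes of output, comfortably above the
# 1500-byte overflow.threshold, so the tool result is stored as an overflow.
for i in range(80):
    print(f"line {i}: overflow threshold test padding")
```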
Root Cause
The notice format `overflow:{uuid}` is ambiguous: the LLM cannot reliably tell where the `overflow:` prefix ends and the UUID value begins, so it copies the whole token literally.
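The failure can be reproduced directly with Python's `uuid` module (a minimal illustration; the project's actual validation code is not shown in this report):

```python
import uuid

raw = "overflow:63a6dc8e-0afd-4711-bbfe-a70318ce9237"

try:
    uuid.UUID(raw)  # the "overflow:" prefix makes this a badly formed UUID string
except ValueError:
    print("invalid")  # this is the path that produces the tool error

# Stripping the prefix first yields a valid UUID.
print(uuid.UUID(raw.removeprefix("overflow:")))
```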
Fix Options (pick one)
Option A (simplest): In read_overflow tool input validation, strip overflow: prefix if present before UUID parsing.
Option B: Change the notice format to separate the prefix clearly:
[full output stored — ID: {uuid} — {bytes} bytes, use read_overflow tool to retrieve]
Option C: Update read_overflow tool description: explicitly state "pass the UUID only, without the 'overflow:' prefix".
Option A is safest (no notice format change, and it accepts both prefixed and bare ids). Option B is cleaner long-term.
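Option A can be sketched as follows (a minimal Python sketch; the function name `normalize_overflow_id` and its placement in the tool's input validation are hypothetical):

```python
import uuid

OVERFLOW_PREFIX = "overflow:"

def normalize_overflow_id(raw: str) -> uuid.UUID:
    """Accept either 'overflow:{uuid}' or a bare UUID string."""
    raw = raw.strip()
    if raw.startswith(OVERFLOW_PREFIX):
        raw = raw[len(OVERFLOW_PREFIX):]
    # Anything that is still not a UUID raises ValueError,
    # preserving the existing rejection of genuinely bad ids.
    return uuid.UUID(raw)
```

This keeps the notice format unchanged while making the tool tolerant of both the prefixed form the LLM copies and the bare UUID it retries with.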
Severity
Low — LLM self-corrects on retry, no data loss. Costs 1 extra tool call per overflow.
Verified
2026-03-15, v0.15.1. Config: overflow.threshold=1500, summarize_output=false. Model: gpt-5-mini.