bug(tools): read_overflow rejects id with overflow: prefix — LLM copies format literally #1868

@bug-ops

Description

Summary

When tool output overflows (i.e., exceeds overflow.threshold), the overflow notice injected into the context is:

[full output stored as overflow:63a6dc8e-0afd-4711-bbfe-a70318ce9237 — 2717 bytes, use read_overflow tool to retrieve]

The LLM parses this and calls read_overflow with id: "overflow:63a6dc8e-..." (including the overflow: prefix), which fails:

[error] invalid tool parameters: id must be a valid UUID

The LLM then retries with the bare UUID and succeeds — but this wastes one LLM turn and one tool call.

Reproduction

  1. Config: [tools.overflow] threshold = 1500, [tools] summarize_output = false
  2. Run a shell command producing more than 1500 characters of output (e.g., an 80-line Python print loop)
  3. Observe: first read_overflow call gets invalid tool parameters: id must be a valid UUID
  4. Observe: second call (with bare UUID) succeeds

Debug dump: 0001-response.txt shows "id": "overflow:63a6dc8e-0afd-4711-bbfe-a70318ce9237"
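For reference, a minimal config that reproduces this, assuming the standard TOML layout for the section and key names given above:

```toml
# Repro config: force overflow on small outputs, no summarization.
[tools]
summarize_output = false

[tools.overflow]
threshold = 1500
```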

Root Cause

The notice format overflow:{uuid} is ambiguous: the LLM cannot reliably tell where the overflow: prefix ends and the UUID value begins, so it copies the whole token verbatim.

Fix Options (pick one)

Option A (simplest): In read_overflow tool input validation, strip the overflow: prefix, if present, before UUID parsing.
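Option A could be a one-line normalization before UUID validation. A sketch (function name is hypothetical, not the actual zeph-llm code):

```rust
/// Hypothetical helper: accept both `overflow:<uuid>` and a bare UUID
/// by stripping the prefix before validation.
fn normalize_overflow_id(id: &str) -> &str {
    id.strip_prefix("overflow:").unwrap_or(id)
}

fn main() {
    // The prefixed form the LLM copies from the notice is accepted...
    assert_eq!(
        normalize_overflow_id("overflow:63a6dc8e-0afd-4711-bbfe-a70318ce9237"),
        "63a6dc8e-0afd-4711-bbfe-a70318ce9237"
    );
    // ...and bare UUIDs pass through unchanged.
    assert_eq!(
        normalize_overflow_id("63a6dc8e-0afd-4711-bbfe-a70318ce9237"),
        "63a6dc8e-0afd-4711-bbfe-a70318ce9237"
    );
    println!("ok");
}
```

The normalized string would then go through the existing UUID parse, so malformed ids still fail validation as before.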

Option B: Change the notice format to separate the prefix clearly:
[full output stored — ID: {uuid} — {bytes} bytes, use read_overflow tool to retrieve]
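If Option B is chosen, the notice builder would change along these lines (function and parameter names are illustrative, not the actual zeph-llm API):

```rust
// Illustrative Option B notice: the UUID stands alone after "ID:",
// so there is no prefix for the model to copy by mistake.
fn overflow_notice(id: &str, bytes: usize) -> String {
    format!("[full output stored — ID: {id} — {bytes} bytes, use read_overflow tool to retrieve]")
}

fn main() {
    let notice = overflow_notice("63a6dc8e-0afd-4711-bbfe-a70318ce9237", 2717);
    // The id appears bare, immediately after "ID: ".
    assert!(notice.contains("ID: 63a6dc8e-0afd-4711-bbfe-a70318ce9237"));
    println!("{notice}");
}
```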

Option C: Update the read_overflow tool description to explicitly state "pass the UUID only, without the 'overflow:' prefix".

Option A is safest (no notice format change, handles both old and new calls). Option B is cleaner long-term.

Severity

Low — the LLM self-corrects on retry, and there is no data loss. Costs one extra LLM turn and one extra tool call per overflow event.

Verified

2026-03-15, v0.15.1. Config: overflow.threshold=1500, summarize_output=false. Model: gpt-5-mini.


Labels: bug (Something isn't working), llm (zeph-llm crate: Ollama, Claude)
