fix(channels): /reset in Telegram mode triggers full LLM inference causing late-reply leaks #2339

@bug-ops

Summary

In Telegram mode, the /reset command triggers full LLM inference (~10-13 seconds) instead of a quick context reset. This causes two problems:

  1. Poor UX: user sends /reset and waits 10+ seconds for a response
  2. E2E test failures: the delayed LLM response leaks into subsequent scenarios (captured by timestamp-based handlers), causing false failures

Reproduction

Run telegram_e2e.py and observe:

  • The pre-test /reset takes >10s, so the E2E harness hits its 10s timeout
  • The delayed response arrives during scenario_startup (after /start is sent)
  • The startup scenario captures the /reset LLM response as the startup reply → FAIL
  • Same pattern: scenario_reset's /reset response leaks into scenario_skills
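The leak mechanism can be modeled in a few lines: if the harness captures the first message whose timestamp is at or after a scenario's send time, a delayed reply from the previous scenario satisfies that predicate. A toy illustration (the `Msg`/`first_reply_after` names are hypothetical, not the actual telegram_e2e.py harness):

```rust
// Toy model of the timestamp-based capture leak. Names are illustrative;
// the real harness lives in telegram_e2e.py.
struct Msg {
    ts: u64,            // arrival time, seconds
    text: &'static str, // message body
}

// The harness treats the first message arriving at or after `sent_at`
// as the reply to the message it just sent.
fn first_reply_after(msgs: &[Msg], sent_at: u64) -> Option<&Msg> {
    msgs.iter().find(|m| m.ts >= sent_at)
}

fn main() {
    // /reset is sent at t=0; its LLM reply only arrives at t=12.
    // The startup scenario sends /start at t=11 and starts listening.
    let inbox = [Msg { ts: 12, text: "I cannot perform a reset..." }];
    let captured = first_reply_after(&inbox, 11).unwrap();
    // The stale /reset reply is captured as the startup reply -> FAIL.
    assert_eq!(captured.text, "I cannot perform a reset...");
    println!("leak reproduced");
}
```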

Expected Behavior

/reset should immediately clear conversation context and return a short confirmation (e.g., "Conversation reset.") without LLM inference.

Actual Behavior

The /reset text is forwarded to the agent loop → the LLM responds with "I cannot perform a reset operation directly. If you want to clear the current se..." after 10-13 seconds.

Root Cause

TelegramChannel.recv() returns the /reset text verbatim to the agent runner. The runner passes it to the LLM pipeline instead of handling it as a command. The CLI mode may handle this differently (needs investigation).
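One possible fix is to short-circuit slash-commands in the runner before any text reaches the LLM pipeline. A minimal sketch, assuming a dispatch step ahead of inference; the `dispatch`/`AgentReply` names are hypothetical, not the actual zeph API:

```rust
// Hypothetical sketch: intercept slash-commands before the LLM pipeline.
// `AgentReply` and `dispatch` are illustrative names, not the real zeph API.

#[derive(Debug, PartialEq)]
enum AgentReply {
    /// Canned confirmation returned immediately, with no LLM call.
    Canned(&'static str),
    /// Plain text to forward to the LLM pipeline as before.
    Forward(String),
}

fn dispatch(text: &str) -> AgentReply {
    match text.trim() {
        // /reset: clear conversation context (elided here) and confirm
        // in well under a second instead of waiting 10-13s on inference.
        "/reset" => AgentReply::Canned("Conversation reset."),
        other => AgentReply::Forward(other.to_string()),
    }
}

fn main() {
    assert_eq!(dispatch("/reset"), AgentReply::Canned("Conversation reset."));
    assert_eq!(dispatch("hello"), AgentReply::Forward("hello".to_string()));
    println!("ok");
}
```

The same dispatch point would also cover CLI mode, which makes it a natural place to verify whether CLI already handles /reset locally.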

Impact

  • E2E startup and skills scenario false failures
  • Telegram UX: 10-13s response time for /reset

Metadata

Labels

  • P2: high value, medium complexity
  • bug: Something isn't working
  • channels: zeph-channels crate (Telegram)
  • llm: zeph-llm crate (Ollama, Claude)
