fix(channels): handle /reset as builtin command, fix long_output e2e timeout#2344
Merged
Fixes #2339: `/reset` is now intercepted in `handle_builtin_command` before reaching the LLM inference path. It clears history, tool cache, and `user_provided_urls` identically to `/clear`, then sends a "Conversation history reset." confirmation. The command is registered in `COMMANDS` under `SlashCategory::Session` so it appears in `/help`.

Fixes #2340: reduces the `scenario_long_output` prompt from 400 to 100 items and `first_timeout` from 90s to 60s. 100 items still produce >4096 chars, satisfying the `len(replies) >= 2` assertion without exceeding the LLM timeout.
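The interception described above can be sketched as follows. This is a minimal illustration only: the `Agent` fields and the `handle_builtin_command` signature are assumptions, and the real types in `crates/zeph-core/src/agent/mod.rs` may differ.

```rust
use std::collections::HashMap;

// Hypothetical agent state; field names mirror the PR description.
struct Agent {
    history: Vec<String>,
    tool_cache: HashMap<String, String>,
    user_provided_urls: Vec<String>,
}

impl Agent {
    /// Returns Some(reply) when the input is a builtin slash command,
    /// short-circuiting before any LLM inference.
    fn handle_builtin_command(&mut self, input: &str) -> Option<String> {
        match input.trim() {
            // `/reset` clears the same state as `/clear`.
            "/reset" => {
                self.history.clear();
                self.tool_cache.clear();
                self.user_provided_urls.clear();
                Some("Conversation history reset.".to_string())
            }
            _ => None, // not a builtin: fall through to the LLM inference path
        }
    }
}
```

Because the check runs before any model call, the confirmation is returned immediately instead of after the 10-13s round trip mentioned in the summary.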
Summary
The `/reset` command in Telegram (and all channels) was forwarded verbatim to the LLM pipeline, causing 10-13s response times. It is now intercepted in `handle_builtin_command` before LLM inference: it clears history, tool cache, and user URLs, then sends an immediate confirmation.

The `scenario_long_output` E2E test requested 400 items (~8k tokens), which exceeded the 120s LLM timeout under load. Reduced to 100 items (still >4096 chars, ≥2 messages) with the first-reply timeout lowered to 60s.

Changes
- `crates/zeph-core/src/agent/mod.rs`: `/reset` handler added to `handle_builtin_command` after the `/clear` block
- `crates/zeph-core/src/agent/slash_commands.rs`: `/reset` registered under `SlashCategory::Session` for `/help` visibility
- `scripts/telegram-e2e/telegram_e2e.py`: `scenario_long_output` reduced from 400 to 100 items, `first_timeout` 90s to 60s
- `CHANGELOG.md`: updated `[Unreleased]`

Test plan
- `cargo +nightly fmt --check`: clean
- `cargo clippy -p zeph-core -- -D warnings`: zero warnings
- `cargo nextest run --features full --lib --bins`: 6965/6965 passed
- `/reset` returns "Conversation history reset." immediately, without LLM inference
- `long_output` scenario passes with ≥2 messages within the 60s timeout
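The ≥2-messages expectation follows from Telegram's 4096-character cap on a single message: any reply longer than that must be split into multiple sends. A minimal sketch of that splitting, assuming a hypothetical `split_into_messages` helper (the real channel code may split on line boundaries instead of fixed offsets):

```rust
// Telegram rejects message text longer than 4096 characters, so a long
// reply has to be chunked before sending.
const TELEGRAM_MAX_LEN: usize = 4096;

/// Splits `text` into chunks of at most TELEGRAM_MAX_LEN bytes.
/// Sketch only: assumes ASCII output; real code must respect UTF-8
/// char boundaries when slicing.
fn split_into_messages(text: &str) -> Vec<String> {
    text.as_bytes()
        .chunks(TELEGRAM_MAX_LEN)
        .map(|chunk| String::from_utf8_lossy(chunk).into_owned())
        .collect()
}
```

With 100 items producing >4096 chars, this chunking yields at least two replies, which is exactly what the `len(replies) >= 2` assertion in `telegram_e2e.py` checks.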