fix(channels): Telegram long_output fails with LLM timeout on 400-item list prompt #2340
Closed
Labels
P2 (high value, medium complexity), bug (something isn't working), channels (zeph-channels crate: Telegram), llm (zeph-llm crate: Ollama, Claude)
Description
Summary
The long_output E2E scenario fails with "LLM request timed out. Please try again." when requesting a 400-item numbered list from the Telegram channel.
Reproduction
Run telegram_e2e.py → long_output scenario sends:
"Write a numbered list from 1 to 400, one item per line..."
Actual Behavior
[FAIL] long_output: 1 message(s), first='LLM request timed out. Please try again.'
The bot receives a timeout message and returns it to the user. Only 1 message received (no multi-message chunking test possible).
Expected Behavior
The bot should produce ≥2 messages (>4096 chars), demonstrating the utf8_chunks splitting in send().
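For illustration, a minimal sketch of the kind of splitting the chunking test exercises (this is not the actual send() implementation; chunk_message and TELEGRAM_LIMIT are hypothetical names, and the real code uses utf8_chunks — the point is only that splits must land on UTF-8 character boundaries and that a ~16,000-char reply needs 4 messages at Telegram's 4096 limit):

```rust
/// Telegram's hard per-message length limit.
const TELEGRAM_LIMIT: usize = 4096;

/// Hypothetical sketch: split a long reply into pieces of at most
/// `limit` chars, iterating over `chars()` so no split ever lands
/// inside a multi-byte UTF-8 sequence.
fn chunk_message(text: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    let mut count = 0usize;
    for ch in text.chars() {
        if count == limit {
            chunks.push(std::mem::take(&mut current));
            count = 0;
        }
        current.push(ch);
        count += 1;
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    // A ~16,000-char output like the 400-item list spans 4 messages.
    let long = "x".repeat(16_000);
    println!("{}", chunk_message(&long, TELEGRAM_LIMIT).len()); // 4
}
```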
Configuration
- Timeout: [timeouts] llm_seconds = 120
- Model: gpt-4o-mini
- Output would be ~16,000 chars (400 items × ~40 chars each)
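One possible remediation sketch for the testing config, assuming the [timeouts] table quoted above lives in a TOML file (the value 240 is an illustrative choice above the ~160 s worst-case estimate, not a recommendation from the codebase):

```toml
# Hypothetical testing override: raise the LLM timeout above the
# ~160 s estimated for the 400-item list.
[timeouts]
llm_seconds = 240
```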
Notes
- gpt-4o-mini generating 400 items × ~40 chars ≈ 8,000+ tokens at ~50 tok/s would take ~160 s, exceeding the 120 s timeout
- Either increase llm_seconds in the testing config, or use a shorter list in the E2E test (e.g. 100 items ≈ 2 messages)
- Also confirms that long_output chunking (>4096 chars) is NOT yet live-tested
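The latency estimate in the first note can be made explicit. All inputs here are the issue's own rough assumptions (~40 chars/item, ~2 chars/token, ~50 tok/s), not measured values:

```rust
/// Rough completion-latency estimate in seconds, from hypothetical
/// throughput assumptions: item count, chars per item, chars per
/// token, and decode speed in tokens per second.
fn estimated_secs(items: f64, chars_per_item: f64, chars_per_token: f64, tokens_per_sec: f64) -> f64 {
    items * chars_per_item / chars_per_token / tokens_per_sec
}

fn main() {
    // 400 items × ~40 chars ≈ 8,000 tokens; at ~50 tok/s that is
    // ~160 s, well past the configured llm_seconds = 120.
    println!("{}", estimated_secs(400.0, 40.0, 2.0, 50.0));
}
```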