fix(channels): Telegram long_output fails with LLM timeout on 400-item list prompt #2340

@bug-ops

Description

Summary

The long_output E2E scenario fails with "LLM request timed out. Please try again." when requesting a 400-item numbered list from the Telegram channel.

Reproduction

Run telegram_e2e.py. The long_output scenario sends:

"Write a numbered list from 1 to 400, one item per line..."

Actual Behavior

[FAIL] long_output: 1 message(s), first='LLM request timed out. Please try again.'

The bot receives a timeout message from the LLM layer and relays it to the user. Only one message is received, so the multi-message chunking path cannot be exercised.

Expected Behavior

The bot should produce ≥2 messages (the full reply exceeds 4096 chars), exercising the utf8_chunks splitting in send().
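
For reference, a minimal sketch of what UTF-8-safe chunking for the Telegram 4096-character limit could look like. This is a hypothetical standalone illustration, not the actual utf8_chunks/send() implementation from zeph-channels; the function name chunk_utf8 and the character-counting approach are assumptions.

```rust
// Hypothetical sketch: split a message into pieces of at most `limit`
// characters without breaking UTF-8 character boundaries.
// Telegram caps messages at 4096 characters.
fn chunk_utf8(text: &str, limit: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    let mut count = 0usize;
    for ch in text.chars() {
        if count == limit {
            // Current chunk is full; start a new one.
            chunks.push(std::mem::take(&mut current));
            count = 0;
        }
        current.push(ch);
        count += 1;
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    // A 400-item numbered list like the one in this scenario should
    // split into at least 2 chunks at the 4096-char limit.
    let msg: String = (1..=400).map(|i| format!("{i}. item number {i}\n")).collect();
    let chunks = chunk_utf8(&msg, 4096);
    assert!(chunks.len() >= 2);
    assert!(chunks.iter().all(|c| c.chars().count() <= 4096));
    println!("{} chunks", chunks.len());
}
```

Note that Telegram actually counts the limit in UTF-16 code units, so a production splitter would need slightly different accounting; the sketch uses Unicode scalar values for simplicity.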

Configuration

  • [timeouts] llm_seconds = 120
  • Model: gpt-4o-mini
  • Output would be ~16,000 chars (400 items × ~40 chars each)
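
The timeout setting above presumably lives in a TOML config; a sketch of the relevant fragment (section and key names taken from the bullet, file name and comment are assumptions):

```toml
# testing config (hypothetical file, e.g. config.test.toml)
[timeouts]
llm_seconds = 120  # current value that the long_output scenario exceeds
```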

Notes

  • gpt-4o-mini generating 400 items × ~40 chars ≈ 8,000+ tokens at ~50 tok/s would take ~160 s, which exceeds the 120 s timeout
  • Either increase llm_seconds in the testing config, or use a shorter list in the E2E test (e.g. 100 items = ~2 messages)
  • Also confirms that the long_output chunking path (>4096 chars) is NOT yet live-tested
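
The back-of-envelope estimate in the first note can be made explicit. This is just the arithmetic from the note as a checkable sketch; the ~2 chars/token ratio and ~50 tok/s throughput are assumptions, not measured values.

```rust
// Rough generation-time estimate from the note above.
// chars_per_token (~2) and tok_per_s (~50) are assumed figures.
fn estimate_secs(items: f64, chars_per_item: f64, chars_per_token: f64, tok_per_s: f64) -> f64 {
    (items * chars_per_item) / chars_per_token / tok_per_s
}

fn main() {
    // 400 items × ~40 chars ≈ 16,000 chars ≈ 8,000 tokens at ~50 tok/s
    let secs = estimate_secs(400.0, 40.0, 2.0, 50.0);
    assert!(secs > 120.0); // exceeds llm_seconds = 120
    println!("estimated generation time: ~{secs:.0}s"); // ~160s
}
```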


Labels

  • P2 — High value, medium complexity
  • bug — Something isn't working
  • channels — zeph-channels crate (Telegram)
  • llm — zeph-llm crate (Ollama, Claude)
