[Bug]: Telegram responses delivered word-by-word with severe performance degradation in 2026.2.15 #18269
Description
Summary
After upgrading from OpenClaw 2026.2.14 to 2026.2.15, Telegram responses are delivered word-by-word ("trickling in"), with severe performance degradation. The first word arrives immediately, but subsequent output is extremely slow, making the system practically unusable for Telegram interactions.
Steps to Reproduce
Setup:
- OpenClaw 2026.2.15 installed on Ubuntu 22.04 LTS (Hetzner vServer)
- Telegram bot configured with standard settings
- Default model: deepseek/deepseek-chat (cost optimization)
- Vision model for screenshots: google/gemini-2.5-flash

Configuration:
"channels": { "telegram": { "enabled": true, "dmPolicy": "pairing", "groupPolicy": "allowlist", "streamMode": "partial" } }
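The same channel block, pretty-printed for readability (the surrounding structure of the config file is assumed):

```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "dmPolicy": "pairing",
      "groupPolicy": "allowlist",
      "streamMode": "partial"
    }
  }
}
```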
Reproduction:
- Send any message to the Telegram bot
- Observe response delivery pattern
Expected Behavior
- Responses should be delivered in reasonable chunks or as complete messages
- Performance should be similar to version 2026.2.14
- Streaming should be smooth and responsive
Actual Behavior
- Word-by-word delivery: Responses arrive one word at a time
- Severe latency: Significant delays between words
- Performance impact: Makes conversations extremely slow and frustrating
- Regression: Version 2026.2.14 worked perfectly
Technical Analysis
Identified Code Changes
Through git analysis, two commits appear relevant:
- dddb1bc94 - "fix(telegram): fix streaming with extended thinking models overwriting previous messages"
  - Changes in src/telegram/bot-message-dispatch.ts
  - Adds logic to prevent error payloads from overwriting preview messages
  - Introduces a hasStreamedMessage flag
- c62b90a2b - "fix(telegram): stop block streaming from splitting messages when streamMode is off"
  - Changes the logic for disableBlockStreaming
  - Modified condition: draftStream || streamMode === "off" ? true : undefined
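If the quoted condition is accurate, a quick evaluation suggests one possible mechanism: with the reporter's configured streamMode of "partial", any truthy draftStream would still force block streaming off. A minimal sketch (the variable values here are assumptions for illustration, not taken from the codebase):

```typescript
// Evaluate the quoted condition under assumed values.
const streamMode: string = "partial"; // the reporter's configured value
const draftStream = true;             // assumption: truthy whenever a draft preview exists

// Condition quoted from commit c62b90a2b:
const disableBlockStreaming = draftStream || streamMode === "off" ? true : undefined;

// If draftStream is truthy, block streaming is disabled even though
// streamMode is "partial" -- a plausible source of per-token delivery.
console.log(disableBlockStreaming);
```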
Hypothesis
The changes intended to prevent message overwriting may have introduced aggressive buffering or changed the streaming behavior, causing the word-by-word delivery.
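To illustrate the hypothesis (this is not OpenClaw's actual dispatch code; all names are hypothetical), a toy dispatcher shows how flipping a buffering flag turns a handful of chunked sends into one Telegram API call per token, each call then subject to Telegram's rate limits:

```typescript
// Toy model of streamed delivery: each entry in the returned array
// represents one Telegram API call (send or message edit).
function dispatch(deltas: string[], disableBlockStreaming: boolean, minChunk = 20): string[] {
  const sends: string[] = [];
  let buffer = "";
  for (const delta of deltas) {
    buffer += delta;
    // With block streaming disabled, every delta is flushed immediately;
    // with it enabled, deltas accumulate until a minimum chunk size.
    if (disableBlockStreaming || buffer.length >= minChunk) {
      sends.push(buffer);
      buffer = "";
    }
  }
  if (buffer) sends.push(buffer);
  return sends;
}

// Split after each space so every token keeps its trailing space.
const tokens = "The quick brown fox jumps over the lazy dog".split(/(?<= )/);

// Chunked path: a few API calls, smooth delivery.
console.log(dispatch(tokens, false).length);
// Per-token path: one API call per word -> rate-limited, word-by-word output.
console.log(dispatch(tokens, true).length);
```

Under this model, the behavior change would not depend on the model backend at all, which matches the observation that direct API calls to the models perform normally.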
Impact and Severity
- Critical: Makes Telegram integration practically unusable
- Regression: Working functionality in 2026.2.14 broken in 2026.2.15
- User experience: Extremely poor, conversations become frustratingly slow
Workarounds Tested
- Revert to 2026.2.14: ✅ Works perfectly
- Change streamMode: Tested "partial", "block", "off" - ❌ No improvement
- Adjust blockStreaming: ❌ No improvement
Environment
- OpenClaw version: 2026.2.15
- Operating system: Ubuntu 22.04.5 LTS
- Install method: npm
- Telegram configuration: Standard bot setup
- Models: deepseek/deepseek-chat (primary), google/gemini-2.5-flash (vision)
Additional Information
- The issue occurs with all message types (text responses, image analyses)
- No errors in logs, just extremely slow delivery
- Direct API calls to models work normally (not a model performance issue)
- Issue is specific to Telegram channel output
Request
Please investigate the streaming/buffering changes in 2026.2.15 and provide a fix to restore normal response delivery performance in Telegram.
Priority: High - this is a critical regression affecting core functionality.