[Bug]: Telegram responses delivered word-by-word with severe performance degradation in 2026.2.15 #18269

@breti169-arch

Description

Summary

After upgrading from OpenClaw 2026.2.14 to 2026.2.15, Telegram responses are delivered word by word ("trickling in") with severe performance degradation. The first word arrives immediately, but subsequent output is extremely slow, making the system practically unusable for Telegram interactions.

Steps to Reproduce

  1. Setup:

    • OpenClaw 2026.2.15 installed on Ubuntu 22.04 LTS (Hetzner vServer)
    • Telegram bot configured with standard settings
    • Default model: deepseek/deepseek-chat (cost optimization)
    • Vision model for screenshots: google/gemini-2.5-flash
  2. Configuration:

    "channels": {
      "telegram": {
        "enabled": true,
        "dmPolicy": "pairing",
        "groupPolicy": "allowlist",
        "streamMode": "partial"
      }
    }
  3. Reproduction:

    • Send any message to the Telegram bot
    • Observe response delivery pattern

Expected Behavior

  • Responses should be delivered in reasonable chunks or as complete messages
  • Performance should be similar to version 2026.2.14
  • Streaming should be smooth and responsive

Actual Behavior

  • Word-by-word delivery: Responses arrive one word at a time
  • Severe latency: Significant delays between words
  • Performance impact: Makes conversations extremely slow and frustrating
  • Regression: Version 2026.2.14 worked perfectly

Technical Analysis

Identified Code Changes

Through git analysis, two commits appear relevant:

  1. dddb1bc94 - "fix(telegram): fix streaming with extended thinking models overwriting previous messages"

    • Changes in src/telegram/bot-message-dispatch.ts
    • Adds logic to prevent error payloads from overwriting preview messages
    • Introduces hasStreamedMessage flag
  2. c62b90a2b - "fix(telegram): stop block streaming from splitting messages when streamMode is off"

    • Changes logic for disableBlockStreaming
    • Modified condition: draftStream || streamMode === "off" ? true : undefined
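For anyone triaging this, the quoted condition from c62b90a2b can be sketched in isolation. This is an illustrative reconstruction, not the actual OpenClaw source: only the expression "draftStream || streamMode === "off" ? true : undefined" is quoted from the commit, and the function name and types around it are assumptions.

```typescript
// Illustrative reconstruction of the condition described in commit c62b90a2b.
// Only the expression itself is quoted from the commit message; the function
// name, parameter types, and the StreamMode union are assumptions.
type StreamMode = "partial" | "block" | "off";

function disableBlockStreaming(
  draftStream: boolean,
  streamMode: StreamMode,
): true | undefined {
  // Operator precedence makes this parse as:
  //   (draftStream || streamMode === "off") ? true : undefined
  // i.e. block streaming is disabled whenever draftStream is truthy,
  // even with streamMode "partial" (the reporter's configuration).
  return draftStream || streamMode === "off" ? true : undefined;
}
```

If draftStream is truthy on the Telegram code path, block streaming would be disabled even under streamMode "partial", which would be consistent with output degrading to per-delta (word-by-word) updates.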

Hypothesis

The changes intended to prevent message overwriting may have altered the streaming or buffering behavior, for example flushing each streamed delta as its own Telegram update instead of coalescing deltas, which would explain the word-by-word delivery.
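To make the hypothesis concrete, here is a minimal sketch of the coalescing behavior that per-word delivery suggests was lost: batching stream deltas on a timer so that one Telegram edit carries many words, rather than issuing one edit per delta. All names here (makeCoalescingStreamer, sendEdit, flushIntervalMs) are hypothetical and not taken from the OpenClaw codebase.

```typescript
// Hypothetical sketch of delta coalescing for a streamed Telegram reply.
// Instead of calling the edit API once per streamed token/word, deltas are
// accumulated and flushed at most once per flushIntervalMs.
type SendEdit = (fullText: string) => Promise<void>;

function makeCoalescingStreamer(sendEdit: SendEdit, flushIntervalMs = 750) {
  let pending = ""; // deltas accumulated since the last flush
  let full = "";    // complete text delivered so far
  let timer: ReturnType<typeof setTimeout> | null = null;

  const flush = async () => {
    timer = null;
    if (!pending) return;
    full += pending;
    pending = "";
    await sendEdit(full); // one edit per interval, not one per word
  };

  return {
    push(delta: string) {
      pending += delta;
      if (!timer) timer = setTimeout(flush, flushIntervalMs);
    },
    async end() {
      if (timer) clearTimeout(timer);
      await flush(); // deliver whatever is still buffered
    },
  };
}
```

If the 2026.2.15 changes effectively bypass this kind of batching on the affected path, every delta becomes its own edit, and Telegram's per-chat rate limiting would then stretch delivery out exactly as described above.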

Impact and Severity

  • Critical: Makes Telegram integration practically unusable
  • Regression: Working functionality in 2026.2.14 broken in 2026.2.15
  • User experience: Extremely poor, conversations become frustratingly slow

Workarounds Tested

  1. Revert to 2026.2.14: ✅ Works perfectly
  2. Change streamMode: Tested "partial", "block", "off" - ❌ No improvement
  3. Adjust blockStreaming: ❌ No improvement

Environment

  • OpenClaw version: 2026.2.15
  • Operating system: Ubuntu 22.04.5 LTS
  • Install method: npm
  • Telegram configuration: Standard bot setup
  • Models: deepseek/deepseek-chat (primary), google/gemini-2.5-flash (vision)

Additional Information

  • The issue occurs with all message types (text responses, image analyses)
  • No errors in logs, just extremely slow delivery
  • Direct API calls to models work normally (not a model performance issue)
  • Issue is specific to Telegram channel output

Request

Please investigate the streaming/buffering changes in 2026.2.15 and provide a fix to restore normal response delivery performance in Telegram.

Priority: High - this is a critical regression affecting core functionality.
