
Telegram streaming "partial" creates duplicate messages due to generation counter race #39795

@MarcoPambianchi

Description


When using streaming: "partial" on Telegram channels, users see duplicate messages: the response briefly appears twice, then one copy is deleted. This happens consistently, and is especially noticeable on mobile, where both notifications are visible.

Environment

  • OpenClaw version: 2026.3.2
  • Platform: Linux (WSL2)
  • Telegram channel with 5 bot accounts
  • streaming: "partial" on all accounts

Steps to Reproduce

  1. Configure a Telegram account with "streaming": "partial" in openclaw.json
  2. Send a message to the bot from Telegram mobile
  3. Observe: the response message appears, then a second identical message appears, then the first one is deleted
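For reference, step 1 corresponds to a config fragment along these lines (same shape as the workaround snippet later in this issue, with the streaming mode flipped to the problematic value):

```json
"telegram": {
  "streaming": "partial"
}
```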

Root Cause Analysis

Traced through pi-embedded-CtM2Mrrj.js:

  1. Stream starts → preview message 1 (M1) sent via sendMessage() (generation=0)
  2. onAssistantMessageStart fires (new response segment):
    • M1 is saved to archivedAnswerPreviews
    • forceNewMessage() increments generation to 1
  3. Next stream partial arrives → sendOrEditStreamMessage() detects the generation changed → sends a new sendMessage() → preview message 2 (M2) created; the duplicate is now visible
  4. Final delivery → deliverLaneText() calls consumeArchivedAnswerPreviewForFinal() → tries to editMessageText on M1 → fails (orphaned) → falls back to sendPayload() (yet another message)
  5. Cleanup → deleteMessage(M1); the user sees the "correction"

The core issue is forceNewMessage() being called in onAssistantMessageStart even when the response is part of the same logical reply. This orphans the current preview and forces creation of a new one, leaving two messages visible until cleanup.
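The race can be sketched as a minimal model. This is a hypothetical reconstruction, not the actual pi-embedded code: the names (generation, archivedAnswerPreviews, the forceNewMessage() bump) mirror the issue text, but the real state shape may differ.

```javascript
// Minimal model of the generation-counter race (illustrative names only).
class StreamPreview {
  constructor() {
    this.generation = 0;           // bumped by forceNewMessage()
    this.lastSentGeneration = 0;   // generation at the time of the last send
    this.activeMessageId = null;
    this.archivedAnswerPreviews = [];
    this.visibleMessages = [];     // what the user currently sees
  }

  sendMessage(text) {
    const id = `M${this.visibleMessages.length + 1}`;
    this.visibleMessages.push({ id, text });
    this.activeMessageId = id;
    this.lastSentGeneration = this.generation;
    return id;
  }

  // Fires on every assistant segment boundary -- even mid-reply (the bug).
  onAssistantMessageStart() {
    this.archivedAnswerPreviews.push(this.activeMessageId); // M1 orphaned
    this.generation += 1;                                   // forceNewMessage()
  }

  sendOrEditStreamMessage(text) {
    if (this.activeMessageId === null || this.generation !== this.lastSentGeneration) {
      return this.sendMessage(text); // generation changed -> brand-new message
    }
    // Normal path: edit the existing preview in place.
    const msg = this.visibleMessages.find(m => m.id === this.activeMessageId);
    msg.text = text;
    return msg.id;
  }
}

const s = new StreamPreview();
s.sendOrEditStreamMessage("partial...");         // M1 sent (generation 0)
s.onAssistantMessageStart();                     // same logical reply, generation -> 1
s.sendOrEditStreamMessage("partial, longer..."); // M2 sent: two messages visible
```

With the guard missing, two messages are visible after the second partial, matching the duplicate users report.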

Workaround

Setting "streaming": "off" on all Telegram accounts eliminates the issue entirely. This bypasses the preview/edit pipeline.

```json
"telegram": {
  "streaming": "off"
}
```

Suggested Fix

onAssistantMessageStart should not call forceNewMessage() if the current preview message is still the active one for the same reply context. The generation counter should only increment when a genuinely new message thread is needed, not on every assistant segment boundary.
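A sketch of that guard, assuming the handler can see the reply context (activeReplyContext and the state fields here are illustrative, not the real pi-embedded internals):

```javascript
// Hypothetical guard: only rotate the preview when the reply context
// genuinely changes, instead of on every assistant segment boundary.
function onAssistantMessageStart(state, replyContext) {
  const sameReply = state.activeReplyContext === replyContext;
  if (sameReply && state.activeMessageId !== null) {
    return; // same logical reply: keep editing the existing preview
  }
  if (state.activeMessageId !== null) {
    state.archivedAnswerPreviews.push(state.activeMessageId);
  }
  state.generation += 1; // forceNewMessage(): a new thread really is needed
  state.activeReplyContext = replyContext;
  state.activeMessageId = null;
}

const state = {
  activeReplyContext: "r1",
  activeMessageId: "M1",
  archivedAnswerPreviews: [],
  generation: 0,
};
onAssistantMessageStart(state, "r1"); // same reply: no-op
onAssistantMessageStart(state, "r2"); // new reply: archive M1, bump generation
```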

Alternatively, deliverLaneText() should check the active stream preview (M2) before falling back to sendPayload(), instead of only trying the archived preview (M1).
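The alternative fallback order could look like this (deliverFinal, editMessageText, and sendPayload are stand-ins for the real functions; the key change is step 2):

```javascript
// Hypothetical final-delivery fallback: archived preview first (current
// behavior), then the active stream preview, and only then a new message.
function deliverFinal(state, finalText, { editMessageText, sendPayload }) {
  const archived = state.archivedAnswerPreviews.pop();
  if (archived && editMessageText(archived, finalText)) {
    return archived; // M1 was still editable
  }
  // NEW step: try the active preview (M2) before creating another message.
  if (state.activeMessageId && editMessageText(state.activeMessageId, finalText)) {
    return state.activeMessageId;
  }
  return sendPayload(finalText); // last resort: a fresh message
}

// Simulate the failure mode from the issue: M1 is orphaned, M2 is live.
const state = { archivedAnswerPreviews: ["M1"], activeMessageId: "M2" };
const attempted = [];
const result = deliverFinal(state, "final text", {
  editMessageText: (id) => { attempted.push(id); return id === "M2"; },
  sendPayload: () => "M3",
});
```

Here the final text lands in M2 instead of spawning a third message, so cleanup only has to delete M1.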
