
fix(telegram): avoid starting streaming replies with only 1-2 words#19673

Closed
emanuelst wants to merge 1 commit into openclaw:main from emanuelst:codex/telegram-streaming-first-preview-fragment-fix

Conversation

@emanuelst (Contributor) commented Feb 18, 2026

fix(telegram): avoid starting streaming replies with only 1-2 words

Summary

  • Problem: Telegram streaming could show incomplete text for short replies before the full text appeared.
  • Why it matters: For example, when receiving "No problem", "No" appeared first (as if it were the full message) and the complete text only arrived a bit later. This also made responses feel laggy (see, for example, [Bug]: Telegram responses delivered word-by-word with severe performance degradation in 2026.2.15 #18269 (comment)).
  • What changed: When Telegram streaming finalizes before any preview message was sent, we now use the final text for the first preview instead of stale partial text. Added a regression test for this case.
  • What did NOT change (scope boundary): No throttle/cadence changes, no config changes, and no non-Telegram changes.
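The fix described above can be sketched roughly as follows. This is a simplified, hypothetical stand-in rather than the actual openclaw code: the `DraftStream` shape and the `finalizeStream` helper are assumptions loosely modeled on the identifiers the review mentions (`draftStream.stop()`, `hasStreamedMessage`, `previewMessageId`):

```typescript
// Minimal sketch of the fix intent (hypothetical names; not the actual
// openclaw implementation).

interface DraftStream {
  previewMessageId?: number; // set once a preview message has been sent
  text: string;              // last streamed (possibly partial) text
  update(text: string): void;
  stop(): void;
}

// Before stopping the draft stream, prime it with the final text if
// streaming started but no preview message went out yet. Otherwise the
// first visible message would show stale partial text ("No" instead of
// "No problem").
function finalizeStream(
  draft: DraftStream,
  hasStreamedMessage: boolean,
  finalText: string,
): string {
  if (hasStreamedMessage && draft.previewMessageId === undefined) {
    draft.update(finalText);
  }
  draft.stop();
  return draft.text;
}
```

With a draft holding only the stale partial "No", finalizing with the final text "No problem" primes the draft before `stop()`, so the first visible message is complete; when a preview was already sent, the path is unchanged.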

Change Type (select all)

  • Bug fix

Scope (select all touched areas)

  • Integrations

Linked Issue/PR

User-visible / Behavior Changes

Telegram streaming no longer shows an incomplete first preview fragment.

Security Impact (required)

  • New permissions/capabilities? (No)
  • Secrets/tokens handling changed? (No)
  • New/changed network calls? (No)
  • Command/tool execution surface changed? (No)
  • Data access scope changed? (No)
  • If any Yes, explain risk + mitigation: N/A

Repro + Verification

Environment

  • OS: macOS (dev), Linux/Raspberry Pi (manual validation)
  • Runtime/container: Node 22
  • Model/provider: N/A (behavior in Telegram draft streaming path)
  • Integration/channel (if any): Telegram
  • Relevant config (redacted): channels.telegram.streamMode: "partial"

Steps

  1. Send a Telegram message that yields a short streamed answer.
  2. Observe the partial message appear (e.g. "No" instead of "No problem").
  3. Compare behavior before and after the patch.

Expected

  • The first visible message should not contain only a fragment (e.g. a single word) of the reply.

Actual

  • Before: the streamed message may contain only one word of a two-word reply.
  • After: the first preview uses the final text instead of stale partial text.

Evidence

Attach at least one:

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording
  • Perf numbers (if relevant)

Human Verification (required)

What you personally verified (not just CI), and how:

  • Verified scenarios: Tested on my openclaw install (before/after)
  • Edge cases checked: Longer messages still work and stream
  • What you did not verify: the full repo-wide test suite

Compatibility / Migration

  • Backward compatible? (Yes)
  • Config/env changes? (No)
  • Migration needed? (No)
  • If yes, exact upgrade steps: N/A

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly: revert the commit
  • Files/config to restore:
    • bot-message-dispatch.ts
    • bot-message-dispatch.test.ts
  • Known bad symptoms reviewers should watch for: N/A

Risks and Mitigations

None

Notes

  • AI-assisted
  • Initial Prompt
We noticed that in Telegram with message streaming,
sometimes only the first word appears and then it takes
some time until the next parts appear.

This is especially glaring with short responses like "no problem":
only "no" appears at first while streaming, which reads as if the answer were just "no".
Only after some time does the full "no problem" appear.

here someone mentions something similar https://github.com/openclaw/openclaw/issues/18269

"Another option is to make first sentence a bulk load/faster rate, 
from ux perspective if you have something to read you can tolerate
slower updates, annoying part is that now it start with 1-2 words and it feels slow."

--

Can you investigate whether:
this still exists in current main?
where it may come from?
and propose a minimal fix?

Greptile Summary

This PR fixes an issue where Telegram streaming would display incomplete first preview fragments (e.g., showing "No" before "No problem" appears). The fix adds logic to update the draft stream with the final text before calling stop() when streaming has occurred but no preview message has been sent yet. This ensures the first visible message contains the complete text rather than stale partial content.

  • Adds check before draftStream.stop() to prime the draft with final text when hasStreamedMessage is true but previewMessageId is undefined
  • Adds token parameter to two editMessageTelegram calls for proper account context
  • Includes regression test covering the "no" → "no problem" scenario
  • Import reordering (type imports moved to top) follows TypeScript style conventions

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • The fix is targeted and well-tested. It addresses a specific UX issue without changing the overall streaming architecture. The added logic is defensive (checks multiple conditions before updating), includes a comprehensive regression test, and the token parameters were already part of the function signature.
  • No files require special attention

Last reviewed commit: 79895b7

@obviyus (Contributor) commented Feb 26, 2026

Superseded by #27449, which ports this fix intent onto current main (lane-delivery architecture) and includes regression coverage for the short-stale-partial first preview case.


Labels

  • channel: telegram (Channel integration: telegram)
  • size: S
