
feat(mattermost): block streaming edit-in-place (rebased) #33506

Open

teconomix wants to merge 27 commits into openclaw:main from teconomix:feat/mattermost-block-streaming-rebased

Conversation

@teconomix (Contributor)

Rebase/continuation of Luna2026-a11y’s work for:

This PR cherry-picks and resolves conflicts against current upstream main.

Notes:

  • Conflict resolution updated to match the current ReplyDispatcher deliver signature: `deliver(payload, info: { kind })`.
  • Mattermost extension unit tests pass locally via `pnpm test:extensions` (the overall suite currently has unrelated failures in other extensions on latest main).

Refs:

@greptile-apps (Contributor)

greptile-apps bot commented Mar 3, 2026

Greptile Summary

This PR implements edit-in-place streaming for the Mattermost extension: while a reply is being generated, the bot progressively edits a single Mattermost post in-place (via patchMattermostPost) at a 200 ms throttle, then finalises it with the authoritative text when the model finishes. It's a rebase/cherry-pick of earlier work (PRs #25295 and #25635) with conflicts resolved against current upstream.

Key changes:

  • client.ts: adds patchMattermostPost (PUT /posts/{id}/patch) and deleteMattermostPost (DELETE /posts/{id}), both correctly using client.request (and therefore the configured fetchImpl/SSRF guard); also propagates AbortSignal through createMattermostPost.
  • send.ts: threads the new signal option through to createMattermostPost.
  • monitor.ts: full streaming state machine — throttled setInterval, bounded inflight tracking (patchInflight / patchAbortController), 8 s flush timeout, previewSendFailed latch to prevent retry storms, and comprehensive orphan-cleanup across every exit path (in-place patch success, media fallback, divergent-thread case, media-only final payload, and onSettled).
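As a rough sketch of what the two client wrappers could look like (the `MattermostClient` interface and the `client.request` signature below are assumptions for illustration, not the repo's actual types):

```typescript
// Hypothetical minimal client shape: the real client in this repo is only
// assumed to expose a request() that routes through fetchImpl/SSRF guards.
interface MattermostClient {
  request(
    path: string,
    init: { method: string; body?: string; signal?: AbortSignal },
  ): Promise<unknown>;
}

// PUT /posts/{id}/patch: edit an existing post's message in place.
async function patchMattermostPost(
  client: MattermostClient,
  postId: string,
  message: string,
  signal?: AbortSignal,
): Promise<unknown> {
  return client.request(`/posts/${encodeURIComponent(postId)}/patch`, {
    method: "PUT",
    body: JSON.stringify({ message }),
    signal,
  });
}

// DELETE /posts/{id}: best-effort cleanup of an orphaned streamed post.
async function deleteMattermostPost(
  client: MattermostClient,
  postId: string,
): Promise<unknown> {
  return client.request(`/posts/${encodeURIComponent(postId)}`, {
    method: "DELETE",
  });
}
```

Routing both calls through `client.request` rather than global `fetch` is what keeps the configured `fetchImpl`/SSRF guard in the loop, which was the point of the earlier review fix.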

Issues addressed from prior review rounds: global-fetch bypass, missing deleteMattermostPost import, streaming state not reset on success path, lastSentText race allowing the final delivery to race the last in-flight tick, and unbounded spin-wait.

Remaining finding:

  • fetchMattermostUserTeams is imported in monitor.ts but never referenced there (it lives in send.ts / monitor-slash.ts where it's actually used).

Confidence Score: 5/5

  • Safe to merge — all previously flagged blocking issues have been resolved; the only remaining finding is a P2 style issue (unused import).
  • Every P0/P1 concern from the prior review threads has been addressed in this revision: patchMattermostPost/deleteMattermostPost now use client.request, deleteMattermostPost is imported, streaming state is fully reset on all exit paths, the lastSentText race is fixed, and the inflight/bounded-timeout machinery prevents both orphan posts and indefinite blocking. The only outstanding finding is a trivially removed unused import (fetchMattermostUserTeams in monitor.ts), which does not affect correctness or runtime behaviour.
  • No files require special attention for merge safety.
Path: extensions/mattermost/src/mattermost/monitor.ts
Line: 8

Comment:
**Unused import `fetchMattermostUserTeams`**

`fetchMattermostUserTeams` is added to the import block here but is never referenced anywhere in `monitor.ts`. It's already consumed by `send.ts` and `monitor-slash.ts` where it's actually needed. This stray import will produce a lint/TypeScript warning.

```suggestion
  fetchMattermostChannel,
  fetchMattermostMe,
  fetchMattermostUser,
  deleteMattermostPost,
  normalizeMattermostBaseUrl,
  patchMattermostPost,
```


Reviews (11). Last reviewed commit: "fix(mattermost): skip whitespace-only pa..."

```typescript
  formatPairingApproveHint,
  migrateBaseNameToDefaultAccount,
  normalizeAccountId,
  registerPluginHttpRoute,
```
Contributor


Path: extensions/mattermost/src/channel.ts
Line: 9

Comment:
**Unused import `registerPluginHttpRoute`**

`registerPluginHttpRoute` is imported but never referenced anywhere in this file. This will likely produce a lint/TypeScript warning and should be removed.

```suggestion
  resolveAllowlistProviderRuntimeGroupPolicy,
```


Comment on lines +25 to +30
```typescript
import { normalizeMattermostBaseUrl } from "./mattermost/client.js";
import {
  createMattermostClient,
  patchMattermostPost,
  deleteMattermostPost,
} from "./mattermost/client.js";
```
Contributor


Path: extensions/mattermost/src/channel.ts
Line: 25-30

Comment:
**Duplicate import from the same module**

`normalizeMattermostBaseUrl` is imported from `./mattermost/client.js` on its own (line 25), and then `createMattermostClient`, `patchMattermostPost`, and `deleteMattermostPost` are imported in a second block from the same module. These should be merged into a single import statement to avoid the redundancy.

```suggestion
import {
  createMattermostClient,
  deleteMattermostPost,
  normalizeMattermostBaseUrl,
  patchMattermostPost,
} from "./mattermost/client.js";
```


@greptile-apps (Contributor)

greptile-apps bot commented Mar 3, 2026

Additional Comments (1)

Path: extensions/mattermost/src/channel.ts
Line: 40-57

Comment:
**`edit`/`delete` gated behind reactions config**

`listActions` only returns `["react", "edit", "delete"]` when `hasReactionCapableAccount` is true — a condition that checks both that a botToken/baseUrl are present AND that `reactions` is not `false`. This means that if a user sets `reactions: false` in their config (but has valid credentials), the `edit` and `delete` actions will also be hidden, even though `handleAction` never checks the reactions gate for those two actions. The gate for `edit` and `delete` should only require an enabled account with a configured botToken and baseUrl, independent of the `reactions` flag.

```ts
listActions: ({ cfg }) => {
  const actionsConfig = cfg.channels?.mattermost?.actions as { reactions?: boolean } | undefined;
  const baseReactions = actionsConfig?.reactions;
  const enabledAccounts = listMattermostAccountIds(cfg)
    .map((accountId) => resolveMattermostAccount({ cfg, accountId }))
    .filter((account) => account.enabled)
    .filter((account) => Boolean(account.botToken?.trim() && account.baseUrl?.trim()));

  const hasReactionCapableAccount = enabledAccounts.some((account) => {
    const accountActions = account.config.actions as { reactions?: boolean } | undefined;
    return (accountActions?.reactions ?? baseReactions ?? true) !== false;
  });

  const hasEditDeleteCapableAccount = enabledAccounts.length > 0;

  const actions: ChannelMessageActionName[] = [];
  if (hasReactionCapableAccount) actions.push("react");
  if (hasEditDeleteCapableAccount) actions.push("edit", "delete");
  return actions;
},
```



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: bed7951d52

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

```diff
 }

-return ["react"];
+return ["react", "edit", "delete"];
```


**P2:** Expose edit/delete even when reactions are disabled

This return now advertises edit/delete, but listActions still gates on hasReactionCapableAccount above, which becomes false when channels.mattermost.actions.reactions (or per-account reactions) is disabled; in that configuration the plugin returns no actions at all, so the new edit/delete capabilities never make it into action discovery/tool schemas even though supportsAction and handleAction implement them.


```typescript
if ((isBlock || isFinal) && blockStreamingClient && mediaUrls.length === 0 && text) {
  if (isBlock && blockStreamMessageId) {
    // Subsequent block: accumulate and edit the existing message
    blockStreamAccumulatedText += (blockStreamAccumulatedText ? "\n" : "") + text;
```


**P2:** Append block-stream text without hardcoded newline joins

Using a fixed "\n" separator between streamed blocks alters reconstructed message content for multi-block replies (for example, sentence-chunked streams should join with spaces, and paragraph/code formatting can be changed), and because block streaming commonly drops final payloads after successful streaming, this altered accumulated text is what remains in the edited Mattermost post.


@openclaw-barnacle bot added the `channel: mattermost` and `size: M` labels on Mar 3, 2026
@teconomix (Contributor, Author)

@greptile-apps review

Fixed the failing test in src/plugins/commands.test.ts — the getPluginCommandSpecs() function always includes acceptsArgs in its return type (defaulting to false), but the test expectation was missing that field. Updated to match the actual spec shape. All 6487 tests now pass (6489 total, 2 skipped as before).

Comment on lines +938 to +951
```typescript
} catch (err) {
  logVerboseMessage(
    `mattermost block-stream edit failed: ${String(err)}, falling back to new message`,
  );
  blockStreamMessageId = null;
  blockStreamAccumulatedText = "";
  // Fall through to send as new message
  const result = await sendMessageMattermost(to, text, {
    accountId: account.accountId,
    replyToId: threadRootId,
  });
  blockStreamMessageId = result.messageId;
  blockStreamAccumulatedText = text;
}
```
Contributor


Path: extensions/mattermost/src/mattermost/monitor.ts
Line: 938-951

Comment:
**Fallback sends only current block, losing all accumulated text**

When the `patchMattermostPost` call fails for a subsequent block, the code:
1. Appends the current chunk to `blockStreamAccumulatedText` (`+= text`)
2. Clears `blockStreamAccumulatedText = ""`
3. Sends a new message with only `text` (the current block), not the full accumulated text

This means all content from blocks 2 through N-1 (which was stored in `blockStreamAccumulatedText`) is silently discarded. The user ends up with two separate messages: the original post (stuck at the last successfully-patched content) and a new message containing only the current block.

The fallback message should use the accumulated text (captured before clearing), so that at minimum the new message contains everything seen so far:

```typescript
              } catch (err) {
                logVerboseMessage(
                  `mattermost block-stream edit failed: ${String(err)}, falling back to new message`,
                );
                const fullAccumulated = blockStreamAccumulatedText; // already includes += text
                blockStreamMessageId = null;
                blockStreamAccumulatedText = "";
                // Fall through to send as new message
                const result = await sendMessageMattermost(to, fullAccumulated, {
                  accountId: account.accountId,
                  replyToId: threadRootId,
                });
                blockStreamMessageId = result.messageId;
                blockStreamAccumulatedText = fullAccumulated;
              }
```


```typescript
  blockStreamMessageId = result.messageId;
  blockStreamAccumulatedText = text;
}
runtime.log?.(`block-stream edited ${blockStreamMessageId}`);
```
Contributor


Path: extensions/mattermost/src/mattermost/monitor.ts
Line: 952

Comment:
**Misleading log message after fallback send**

After the catch block completes a fallback by *sending a new message* (not patching an existing one), this log still reads `"block-stream edited …"`. The `blockStreamMessageId` at this point holds the ID of the newly-sent fallback message, so the log is factually wrong and will confuse diagnostics.

```suggestion
              runtime.log?.(`block-stream fallback sent ${blockStreamMessageId}`);
```



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a9ddaa96dd


```typescript
if (isBlock && blockStreamMessageId) {
  // Subsequent block: accumulate and edit the existing message
  // Append exactly as we receive blocks; do not inject separators.
  blockStreamAccumulatedText += text;
```


**P2:** Strip per-block prefix before accumulating streamed text

When responsePrefix is enabled, the dispatcher prefixes each block payload before deliver runs, and this code appends each block verbatim into blockStreamAccumulatedText; for successful block-streaming runs, final payloads are commonly suppressed in buildReplyPayloads (shouldDropFinalPayloads), so there is no later overwrite to clean this up. The resulting edited Mattermost post can contain repeated prefixes in the middle of one message (for example, "[bot] ...[bot] ...") whenever a reply spans multiple blocks.


@teconomix marked this pull request as draft on March 4, 2026 at 07:07
@teconomix force-pushed the feat/mattermost-block-streaming-rebased branch from a9ddaa9 to 7ca734f on March 6, 2026 at 17:44
@teconomix marked this pull request as ready for review on March 7, 2026 at 07:43
Comment on lines +1271 to +1279
```typescript
} catch (err) {
  logVerboseMessage(
    `mattermost block-stream final edit failed: ${String(err)}, sending new message`,
  );
  await sendMessageMattermost(to, finalText, {
    accountId: account.accountId,
    replyToId: threadRootId,
  });
}
```
Contributor


Path: extensions/mattermost/src/mattermost/monitor.ts
Line: 1271-1279

Comment:
**Orphaned partial message on final-patch failure**

When `patchMattermostPost` fails in the `isFinal` branch, the fallback sends a brand-new message containing the complete final text — but the previously streamed message (whose ID was `blockStreamMessageId`) is left in the channel as an orphaned partial. Users end up seeing two messages: a dangling intermediate post and the new complete one.

Since you already have a reference to the orphaned post via the captured `blockStreamMessageId`, you can attempt to clean it up before (or after) sending the new fallback:

```typescript
              } catch (err) {
                logVerboseMessage(
                  `mattermost block-stream final edit failed: ${String(err)}, sending new message`,
                );
                const orphanId = blockStreamMessageId;
                // Reset state before sending so the new message is tracked correctly.
                blockStreamMessageId = null;
                blockStreamAccumulatedText = "";
                // Best-effort delete of the orphaned partial stream message.
                try {
                  await deleteMattermostPost(blockStreamingClient, orphanId);
                } catch {
                  // Ignore — the delete is a courtesy cleanup.
                }
                await sendMessageMattermost(to, finalText, {
                  accountId: account.accountId,
                  replyToId: threadRootId,
                });
                return;
              }
```

Note that `deleteMattermostPost` is already imported and available in `monitor.ts` (via `client.ts`); it just needs to be added to the import list from that module.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7ca734ffe4


Comment on lines +1245 to +1246
```typescript
blockStreamMessageId = null;
blockStreamAccumulatedText = "";
```


**P1:** Keep stream state until fallback send succeeds

This clears blockStreamMessageId and blockStreamAccumulatedText before awaiting the fallback sendMessageMattermost call, so any send failure (for example transient API/network errors) discards all accumulated streamed text state. In block-stream mode the final payload is commonly dropped after successful streaming, so this can leave users with truncated or missing replies instead of a recoverable retry path.


Comment on lines +1253 to +1254
```typescript
blockStreamMessageId = result.messageId;
blockStreamAccumulatedText = fallbackText;
```


**P2:** Disable patch retries after first edit failure

After a patch failure, the fallback branch stores the new fallback post id back into blockStreamMessageId, so subsequent blocks immediately re-enter the patch path and repeat the same failure mode. When patching is persistently disallowed (for example missing edit_post permission), this emits a new full accumulated message on every block and floods the thread with duplicates.


@teconomix force-pushed the feat/mattermost-block-streaming-rebased branch from 7ca734f to b7bba6b on March 7, 2026 at 08:10
@teconomix (Contributor, Author)

Update: Live Tests Completed ✅

After extensive live testing today, the streaming implementation has been significantly reworked based on what we learned. Here's a summary of what changed and why.

Original approach: delta accumulation (abandoned)

The initial implementation accumulated block streaming deltas in deliver(kind="block") and patched Mattermost with the growing accumulated text. This turned out to be fundamentally broken:

  • The EmbeddedBlockChunker strips whitespace at break points (\n\n, sentence space, \n) when emitting chunks
  • Reconstructing the original text from deltas requires knowing which type of break was used — paragraph, newline, or sentence — which is not available at delivery time
  • Any joiner heuristic either corrupts markdown (spurious blank lines between every chunk) or leaves words joined without spacing (undadministrativer)
  • Headings (## Title) lose their preceding \n\n and fail to render as headings
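A toy illustration of why no fixed joiner can win here (the chunks below are hypothetical, not actual EmbeddedBlockChunker output):

```typescript
// The original reply mixes a sentence break and a paragraph break:
const original = "First sentence. Second one.\n\n## Heading";

// A chunker that strips whitespace at break points would emit:
const chunks = ["First sentence.", "Second one.", "## Heading"];

// At delivery time the break type is gone, so every fixed joiner is wrong:
const joinedWithBlankLines = chunks.join("\n\n"); // spurious paragraph break between the sentences
const joinedWithNewlines = chunks.join("\n");     // heading loses its blank line, may not render
const joinedWithSpaces = chunks.join(" ");        // "## Heading" glued onto prose, never renders

console.log(joinedWithBlankLines === original); // false
console.log(joinedWithNewlines === original);   // false
console.log(joinedWithSpaces === original);     // false
```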

New approach: onPartialReply with full text (same as Discord draft streaming)

The root fix: use onPartialReply instead of block streaming. onPartialReply provides the complete accumulated text at every step — no reconstruction needed, no whitespace guessing.

This is exactly how Discord's edit-in-place draft streaming works (updateDraftFromPartial in message-handler.process.ts). We now follow the same pattern:

```
onPartialReply(payload) → schedulePatch(payload.text)   // full text, throttled
deliver(kind="final")   → flushPendingPatch() + final patch with authoritative text
```

A STREAM_PATCH_THROTTLE_MS = 200 leading-edge throttle batches the frequent onPartialReply events into one Mattermost edit per 200ms, staying well within rate limits (~5 edits/s vs. Mattermost's ~10 req/s limit). Block streaming is explicitly disabled (disableBlockStreaming: true) when a streaming client is available.
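Under those constraints, the throttle core might be sketched like this (only `STREAM_PATCH_THROTTLE_MS`, `schedulePatch`, and `flushPendingPatch` are named in the PR; the factory shape and internals are illustrative, not the actual monitor.ts code):

```typescript
const STREAM_PATCH_THROTTLE_MS = 200;

// Leading-edge throttle: the first partial is patched immediately; later
// partials inside the window only overwrite pendingText, which a single
// trailing timer flushes, so Mattermost sees at most ~5 edits per second.
function createPatchThrottle(patch: (text: string) => Promise<void>) {
  let pendingText: string | null = null;
  let timer: ReturnType<typeof setTimeout> | null = null;

  const flushWindow = () => {
    timer = null;
    if (pendingText !== null) {
      const text = pendingText;
      pendingText = null;
      void patch(text);
    }
  };

  return {
    // Called from onPartialReply with the full accumulated text so far.
    schedulePatch(text: string) {
      if (timer === null) {
        void patch(text); // leading edge: first update goes out right away
        timer = setTimeout(flushWindow, STREAM_PATCH_THROTTLE_MS);
      } else {
        pendingText = text; // coalesce into the next window
      }
    },
    // Called on the final payload: cancel the window and patch the
    // authoritative text immediately.
    flushPendingPatch(finalText: string): Promise<void> {
      if (timer !== null) {
        clearTimeout(timer);
        timer = null;
      }
      pendingText = null;
      return patch(finalText);
    },
  };
}
```

Because `onPartialReply` always carries the full text, a dropped intermediate tick costs nothing: the next window simply patches a more complete snapshot.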

What we validated in live testing

  • ✅ First content appears within ~200ms of model starting to generate
  • ✅ Message edits in-place progressively — chunks appear piece by piece
  • ✅ Markdown formatting fully preserved (headings, bold, lists, code blocks)
  • ✅ No words joined without spacing at chunk boundaries
  • ✅ No spurious blank lines between paragraphs
  • ✅ Tested with 5-page essays (long responses with rich markdown)
  • ✅ No Mattermost rate limit errors observed
  • ✅ Final message always correct (isFinal patch with authoritative text)
  • ✅ Fallback path works: orphaned partial is cleaned up if final patch fails

Other fixes included in this PR (from previous review rounds)

  • Fallback send: state is cleared after successful send (not before), so content is never lost on send failure
  • patchingDisabled flag prevents retry-loop flooding when edit_post permission is missing
  • Orphan cleanup on isFinal patch failure via best-effort deleteMattermostPost
  • disableBlockStreaming now passes false instead of undefined when no explicit config
  • Removed blockStreamingCoalesceDefaults (dead code with the new approach)
  • Rebased on current origin/main (picks up fix(feishu): remove invalid timeout properties which resolves the CI TypeScript failures)

@greptile-apps please re-review with the new approach in mind. The core of the streaming logic is now in schedulePatch / flushPendingPatch (monitor.ts) rather than the deliver callback.

```typescript
const orphanId = streamMessageId;
streamMessageId = null;
try {
  await deleteMattermostPost(blockStreamingClient!, orphanId);
```
Contributor


Path: extensions/mattermost/src/mattermost/monitor.ts
Line: 1332

Comment:
**`deleteMattermostPost` used but never imported**

`deleteMattermostPost` is called here but is absent from the `./client.js` import block at the top of the file (only `patchMattermostPost` was added). This will fail TypeScript compilation and throw a `ReferenceError` at runtime whenever the `patchMattermostPost` catch-branch is hit.

The import on line 34 needs to include `deleteMattermostPost`:

```suggestion
                await deleteMattermostPost(blockStreamingClient!, orphanId);
```

And the import block in `monitor.ts` should be updated to:
```typescript
import {
  createMattermostClient,
  deleteMattermostPost,
  fetchMattermostChannel,
  fetchMattermostMe,
  fetchMattermostUser,
  fetchMattermostUserTeams,
  normalizeMattermostBaseUrl,
  patchMattermostPost,
  sendMattermostTyping,
  type MattermostChannel,
  type MattermostPost,
  type MattermostUser,
} from "./client.js";
```

How can I resolve this? If you propose a fix, please make it concise.

Comment on lines +1238 to +1244
if (!streamMessageId) {
const result = await sendMessageMattermost(to, text, {
accountId: account.accountId,
replyToId: threadRootId,
});
streamMessageId = result.messageId;
runtime.log?.(`stream-patch started ${streamMessageId}`);

Missing error handling for first-message send in flushPendingPatch

When streamMessageId is null (the first partial-reply message has not yet been sent), sendMessageMattermost is awaited without a try/catch. If this call throws, the error propagates out of flushPendingPatch and then out of the deliver function, which could prevent the final reply from reaching the user. The else branch directly below wraps its patch call in a try/catch — this branch should do the same to stay consistent and resilient:

Suggested change
try {
  const result = await sendMessageMattermost(to, text, {
    accountId: account.accountId,
    replyToId: threadRootId,
  });
  streamMessageId = result.messageId;
  runtime.log?.(`stream-patch started ${streamMessageId}`);
} catch (err) {
  logVerboseMessage(`mattermost stream-patch flush send failed: ${String(err)}`);
}
(Path: extensions/mattermost/src/mattermost/monitor.ts, lines 1238-1244)

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a93badf0b4

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +1401 to +1402
onPartialReply: blockStreamingClient
? (payload: import("../../auto-reply/types.js").ReplyPayload) => {

[P2] Honor blockStreaming=false before wiring partial patch updates

The new streaming path enables onPartialReply solely when blockStreamingClient exists, but that client is present for every runnable Mattermost account, so this ignores channels.mattermost(...).blockStreaming=false. As a result, users who explicitly disable block streaming still get incremental edit-in-place updates, which regresses a documented per-account toggle and removes any opt-out path for this behavior.


Comment on lines +1269 to +1271
patchTimer = setTimeout(async () => {
patchTimer = null;
const text = pendingPatchText;

[P1] Keep patch callback marked pending until network write completes

The scheduled patch callback clears patchTimer before awaiting the send/patch request, so final delivery can run while that async update is still in flight. In that timing window, flushPendingPatch sees no timer and skips waiting, which can lead to two independent writes (final + stale partial) and produce duplicate posts or a final message being overwritten by an older partial chunk.


@teconomix
Contributor Author

Follow-up fixes from review (commit 55dcf492d)

Addressed the review findings that apply to the current onPartialReply-based implementation:

  • blockStreaming: false not honored → added streamingEnabled = account.blockStreaming !== false guard; when explicitly disabled, blockStreamingClient is null and the full streaming path is bypassed
  • Race condition: patchTimer cleared before await → timer is now cleared only after the async network call completes, so flushPendingPatch() correctly detects in-flight patches
  • Missing try/catch in flushPendingPatch first-send → wrapped in try/catch; errors are logged but no longer propagate out and abort the final delivery

Note: Several review comments referenced the older delta-accumulation approach (block accumulation, fallback send losing accumulated text, patchingDisabled retry loop, etc.) — those code paths no longer exist since the onPartialReply rewrite in d875cc8ab. The greptile comment about registerPluginHttpRoute being unused in channel.ts appears to be a false positive — it is used in monitor.ts which is the actual runtime file.

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 55dcf492df


Comment on lines +1330 to +1334
if (isFinal && streamMessageId && text) {
try {
await patchMattermostPost(blockStreamingClient!, {
postId: streamMessageId,
message: text,

[P1] Send final media payloads after stream-text patch

In monitorMattermostProvider's final-delivery branch, any final payload with both text and mediaUrls takes the isFinal && streamMessageId && text path, patches text in place, and returns before reaching the media send loop below. When partial streaming has already created streamMessageId (the normal case for streamed replies), attachments in the final payload are silently dropped and users only receive the edited text message.


@teconomix
Contributor Author

CI fixes (commit 5e1382183)

check job (TS errors):

  • Added deleteMattermostPost to the client.js import block (was called in the isFinal fallback path but not imported → TS2304)
  • Replaced inline import("../../auto-reply/types.js").ReplyPayload with the already-imported ReplyPayload type in the onPartialReply callback → TS2307

secrets job: The flagged locations (src/secrets/apply.test.ts:75,106,360,397,450) are not in this PR — they appear to be a pre-existing issue in the base branch unrelated to our changes.

Remaining review comments: The open review comments (orphan cleanup, accumulated text loss, patchingDisabled) all reference the old delta block-streaming implementation which was removed in d875cc8ab. The current approach uses onPartialReply (full cumulative text) with a throttled patch, so none of those code paths exist anymore.

@teconomix
Contributor Author

CI status — all 3 remaining failures are pre-existing upstream issues

None of the failing checks touch files modified in this PR (git diff fork/main...HEAD --name-only confirms this):

| Job | Error | File | In this PR? |
| --- | --- | --- | --- |
| check (tsgo) | TS2883 — inferred type cannot be named without reference to Procedure from vitest/spy | extensions/feishu/src/monitor.test-mocks.ts:3 | ❌ No |
| pnpm test:extensions | vi.mock factory top-level variable error | extensions/feishu/src/monitor.startup.test.ts, extensions/zalouser/src/monitor.*.test.ts | ❌ No |
| secrets | Potential secrets detected | src/secrets/apply.test.ts:75,106,360,397,450 | ❌ No |

These failures appear to be present on the base branch independently of this PR. Our changes are limited to extensions/mattermost/src/mattermost/monitor.ts and extensions/mattermost/src/channel.ts.

@teconomix
Contributor Author

✅ Production-tested — streaming works end-to-end

What was tested

Tested on a live Mattermost instance (mattermost.teconomix.net) against real LLM responses (Sonnet 4.5). Both DM channels and group channels were verified.

Observed behaviour:

  • First partial post appears within ~200ms of generation start
  • Post edits in-place as model generates, ~200–300ms cadence
  • Final authoritative text replaces partial content cleanly
  • No duplicate messages, no orphaned partial posts
  • TUI and Mattermost stream in sync; TUI not slowed by patch (non-blocking onPartialReply)

Bug found + fixed during prod testing: client.ts missing functions

The patch script initially only modified monitor.ts, but patchMattermostPost and deleteMattermostPost are added by this PR to client.ts. On a fresh npm-installed OpenClaw these functions exist, but the discovery highlighted that any downstream patch workflow must also include client.ts.

Bug found + fixed: account.baseUrl empty in common configs

When baseUrl is set at the top level of the Mattermost channel config (not per-account), account.baseUrl is empty → blockStreamingClient was always null → no streaming. Fixed in commit 6c3991dc0: use the outer-scope baseUrl/botToken (already resolved via opts.baseUrl ?? account.baseUrl) instead of re-reading from account.

Design change: setTimeout → setInterval (commit 6c3991dc0)

The original leading-edge setTimeout approach had a race: while a network call was in flight, new schedulePatch invocations saw the timer as "active" and skipped scheduling, resulting in only one intermediate update per response.

Replaced with setInterval + patchSending mutex:

  • Interval fires every 200ms unconditionally
  • patchSending flag prevents concurrent network calls
  • lastSentText guard skips ticks when no new text arrived
  • stopPatchInterval() for clean teardown

This gives continuous progressive updates throughout generation, not just leading + final.
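The interval-plus-mutex pattern described above can be sketched as a small standalone helper. This is a minimal illustration, not the PR's actual code: names like `patchSending` and `lastSentText` follow the discussion, and the Mattermost network call is replaced with an injected async function.

```typescript
type PatchFn = (text: string) => Promise<void>;

function createThrottledPatcher(patch: PatchFn, intervalMs: number) {
  let pendingText = "";      // latest cumulative text from onPartialReply
  let lastSentText = "";     // last text successfully written
  let patchSending = false;  // mutex: at most one network call in flight
  let timer: ReturnType<typeof setInterval> | null = null;

  const tick = () => {
    const text = pendingText;
    // Skip the tick when nothing changed or a call is still in flight.
    if (!text || text === lastSentText || patchSending) return;
    patchSending = true;
    void (async () => {
      try {
        await patch(text);
        lastSentText = text; // mark as sent only after success
      } finally {
        patchSending = false;
      }
    })();
  };

  return {
    update(text: string) {
      pendingText = text;
      if (!timer) timer = setInterval(tick, intervalMs);
    },
    stop() {
      if (timer) { clearInterval(timer); timer = null; }
    },
    get lastSent() { return lastSentText; },
  };
}
```

Because the interval fires unconditionally and only the mutex gates the network call, a slow patch never suppresses future scheduling; the next tick after it resolves picks up whatever text accumulated in the meantime.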

Latest commit: 6c3991dc0

@greptile-apps please re-review with the above context. All CI failures remain pre-existing upstream issues unrelated to this PR (Feishu test mocks, Zalouser tests, secrets test).

Comment on lines +1280 to +1311
if (!text || text === lastSentText || patchSending) return;
lastSentText = text;
patchSending = true;
void (async () => {
try {
if (!streamMessageId) {
try {
const result = await sendMessageMattermost(to, text, {
accountId: account.accountId,
replyToId: threadRootId,
});
streamMessageId = result.messageId;
runtime.log?.(`stream-patch started ${streamMessageId}`);
} catch (err) {
logVerboseMessage(`mattermost stream-patch send failed: ${String(err)}`);
}
} else {
try {
await patchMattermostPost(blockStreamingClient, {
postId: streamMessageId,
message: text,
});
runtime.log?.(`stream-patch edited ${streamMessageId}`);
} catch (err) {
logVerboseMessage(`mattermost stream-patch edit failed: ${String(err)}`);
}
}
} finally {
patchSending = false;
}
})();
}, STREAM_PATCH_INTERVAL_MS);

In-flight interval patch can overwrite the final authoritative message

The setInterval callback uses a fire-and-forget (void (async () => {...})()) pattern. The lastSentText guard is set synchronously before the network call starts:

lastSentText = text;   // set synchronously
patchSending = true;
void (async () => {
  // in-flight patchMattermostPost — still running
})();

When deliver is called with isFinal immediately after a tick fires, flushPendingPatch calls stopPatchInterval() but then returns early because text === lastSentText. The final patchMattermostPost is then issued concurrently with the still-in-flight tick's patch. If the Mattermost server processes the partial-text patch after the final-text patch, the post ends up permanently showing the truncated partial content instead of the complete answer.

To fix this reliably, replace the patchSending: boolean flag with a tracked promise so flushPendingPatch can await the in-flight call before issuing the authoritative final patch:

// Replace:
let patchSending = false;

// With:
let patchInFlight: Promise<void> | null = null;

Then in the interval tick, assign and await the promise; and in flushPendingPatch, await patchInFlight before proceeding. This ensures the final patch always lands last.
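A compact sketch of that suggestion (a tracked promise instead of a boolean flag), assuming a generic `sendPatch` network function; the names are illustrative, not the PR's actual code:

```typescript
function createFinalSafePatcher(sendPatch: (text: string) => Promise<void>) {
  let patchInFlight: Promise<void> | null = null;

  const patchPartial = (text: string) => {
    // Track the in-flight write so callers can await it later.
    patchInFlight = sendPatch(text).finally(() => {
      patchInFlight = null;
    });
  };

  const patchFinal = async (text: string) => {
    // Wait for any partial write still on the wire, so the
    // authoritative final patch always lands last.
    if (patchInFlight) await patchInFlight.catch(() => {});
    await sendPatch(text);
  };

  return { patchPartial, patchFinal };
}
```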

(Path: extensions/mattermost/src/mattermost/monitor.ts, lines 1280-1311)

Comment on lines 3 to 5
"channels": ["mattermost"],
"configSchema": {
"type": "object",

Schema now accepts any configuration key without validation

Switching from "additionalProperties": false, "properties": {} to "additionalProperties": true removes all schema enforcement for the plugin config. Typos in config keys (e.g. blockstreaming instead of blockStreaming) will silently pass validation and be ignored at runtime instead of producing a useful error.

Consider explicitly enumerating the known properties (including the new blockStreaming boolean) rather than allowing everything:

{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "blockStreaming": { "type": "boolean" }
  }
}

This keeps validation strict while accommodating the new option.

(Path: extensions/mattermost/openclaw.plugin.json, lines 3-5)

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6c3991dc05


Comment on lines +1360 to +1362
if (isFinal) {
stopPatchInterval();
streamMessageId = null;

[P2] Clear patch timer when dispatch ends without a final

schedulePatch creates a repeating setInterval, but timer cleanup is only executed inside the isFinal branch here; if a run emits partial text and then fails/aborts before any final payload, this branch never runs and the interval is left alive indefinitely. In long-running gateways, repeated mid-stream failures will accumulate active timers and stale closures, causing avoidable background wakeups and memory growth until process restart.


@teconomix teconomix force-pushed the feat/mattermost-block-streaming-rebased branch from 6c3991d to c0c1947 Compare March 7, 2026 19:39
@teconomix
Contributor Author

Review fixes — commit c0c1947ee

Rebased onto current main (160 upstream commits) and addressed all outstanding review findings:

Greptile: In-flight interval patch can overwrite final text

Added while (patchSending) { await new Promise(r => setTimeout(r, 20)); } after flushPendingPatch() in the isFinal branch. This ensures any in-flight setInterval tick completes before the final authoritative patch is issued to Mattermost.

Codex: Interval not cleaned up on dispatch abort

Added idle-tick counter to setInterval: after 25 consecutive ticks with no new text (~5s), stopPatchInterval() is called automatically. This handles cases where the dispatch stream is aborted before isFinal fires.
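A hedged sketch of that idle-tick guard: the interval counts consecutive ticks that produce no new text and tears itself down once a limit is reached, so an aborted dispatch cannot leak a live timer. The constants and names here are assumptions, not the PR's exact code.

```typescript
function createSelfStoppingInterval(
  getText: () => string,
  onTick: (text: string) => void,
  intervalMs: number,
  idleTickLimit = 25, // ~5 s at the PR's 200 ms cadence
) {
  let lastText = "";
  let idleTicks = 0;
  const timer = setInterval(() => {
    const text = getText();
    if (!text || text === lastText) {
      // No new text: count the idle tick; stop once the limit is hit.
      if (++idleTicks >= idleTickLimit) clearInterval(timer);
      return;
    }
    idleTicks = 0;
    lastText = text;
    onTick(text);
  }, intervalMs);
  // Manual teardown for the normal isFinal path.
  return () => clearInterval(timer);
}
```

Calling `clearInterval` twice is harmless, so the returned teardown stays safe to call even after the guard has already fired.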

Greptile: additionalProperties: true removes schema validation

Restored additionalProperties: false in openclaw.plugin.json and explicitly enumerated the new blockStreaming: boolean property.

Function rename: patchMattermostPost → updateMattermostPost

Upstream merged updateMattermostPost (from another PR) into client.ts. I removed my redundant patchMattermostPost function and updated all call sites in monitor.ts and channel.ts to use the upstream function.

CI status

All TS errors in check output are pre-existing upstream issues in src/agents/models-config.*.test.ts and ui/src/ui/ — not in any file touched by this PR (git diff origin/main...HEAD --name-only only shows extensions/mattermost/**).

@greptile-apps please re-review c0c1947ee.

Comment on lines +1752 to +1754
while (patchSending) {
await new Promise<void>((r) => setTimeout(r, 20));
}

Unbounded spin-wait can hang indefinitely

The while (patchSending) loop has no timeout or maximum iteration count. patchSending is only reset to false inside the finally block of the fire-and-forget async in schedulePatch. If the in-flight updateMattermostPost or sendMessageMattermost network call never resolves (e.g., Mattermost is unreachable or the request stalls), this loop will spin in 20 ms increments forever — permanently blocking the deliver function, which prevents the final reply from ever reaching the user.

Consider adding a reasonable timeout (e.g. 5–10 seconds) before breaking out:

          const spinDeadline = Date.now() + 10_000; // 10 s max
          while (patchSending && Date.now() < spinDeadline) {
            await new Promise<void>((r) => setTimeout(r, 20));
          }
(Path: extensions/mattermost/src/mattermost/monitor.ts, lines 1752-1754)

Comment on lines +1759 to +1783
if (isFinal && streamMessageId && text) {
try {
await updateMattermostPost(blockStreamingClient!, streamMessageId, { message: text });
runtime.log?.(`stream-patch final edit ${streamMessageId}`);
} catch (err) {
logVerboseMessage(
`mattermost stream-patch final edit failed: ${String(err)}, sending new message`,
);
// Best-effort: delete the orphaned partial message before sending the complete one.
const orphanId = streamMessageId;
streamMessageId = null;
try {
await deleteMattermostPost(blockStreamingClient!, orphanId);
} catch {
// Ignore delete failure — delivering the complete message takes priority
}
await sendMessageMattermost(to, text, {
accountId: account.accountId,
replyToId: threadRootId,
});
return;
}
streamMessageId = null;
return;
}

Media URLs silently dropped when streaming is active

mediaUrls is populated on line 1743, but both exit paths inside the if (isFinal && streamMessageId && text) block — the success path (line 1782 return) and the fallback path (line 1779 return) — skip the media delivery loop entirely. If the final ReplyPayload carries media attachments (e.g. a generated image with a caption), they are silently discarded and never sent to the channel.

The fallback path already calls sendMessageMattermost for the text; it should also iterate over any mediaUrls afterward. The success path should similarly send any accompanying media after the final patch succeeds:

          if (isFinal && streamMessageId && text) {
            try {
              await updateMattermostPost(blockStreamingClient!, streamMessageId, { message: text });
              runtime.log?.(`stream-patch final edit ${streamMessageId}`);
            } catch (err) {
              // ... orphan cleanup + fallback text send ...
              // then fall through to send media below
            }
            streamMessageId = null;
            // fall through to send any media attachments
          }

          // Then send media (runs whether streaming text was used or not):
          for (const mediaUrl of mediaUrls) { ... }
(Path: extensions/mattermost/src/mattermost/monitor.ts, lines 1759-1783)

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c0c1947ee8


Comment on lines +1759 to +1761
if (isFinal && streamMessageId && text) {
try {
await updateMattermostPost(blockStreamingClient!, streamMessageId, { message: text });

[P2] Preserve text chunking when finalizing streamed replies

When isFinal && streamMessageId && text is true, this path edits the post with the full text and returns before reaching the normal chunkMarkdownTextWithMode flow below, so streamed finals ignore textChunkLimit. For long outputs, this can exceed server or configured limits; the catch fallback then calls sendMessageMattermost(to, text, ...) with the same unchunked body, so the final delivery can fail after the partial stream message was already removed.


@teconomix
Contributor Author

CI failure analysis — all failures are pre-existing in main

I verified each failing job against origin/main directly. None of the failures are in files touched by this PR (git diff origin/main...HEAD --name-only only shows extensions/mattermost/**).

check (TypeScript)

Failing lines are in src/agents/models-config.*.test.ts and ui/src/ui/app-render.ts. Both files exist unchanged in main and produce the same errors:

src/agents/models-config.fills-missing-provider-apikey-from-env-var.test.ts(415): TS2741 — Property 'baseUrl' is missing
ui/src/ui/app-render.ts(1032): TS2339 — Property 'chatStreamSegments' does not exist

Verified: git show origin/main:ui/src/ui/app-render.ts | sed -n '1030,1034p' shows the same chatStreamSegments reference.

secrets (detect-secrets)

Flagged lines are in src/agents/models-config.providers.normalize-keys.test.ts:80 (sk-test-secret-value-12345) and src/agents/models-config.fills-missing-provider-apikey-from-env-var.test.ts:237,416 (sk-plaintext-should-not-appear). Both strings exist verbatim in main.

checks-windows (3 shards)

All failures are in src/plugins/install.test.ts — a Windows path-normalisation assertion (RUNNER~1 vs runneradmin). Not present in our diff.

Summary

| Job | File(s) | In our diff? | In main? |
| --- | --- | --- | --- |
| check | src/agents/models-config.*.test.ts, ui/src/ui/app-render.ts | ❌ | ✅ pre-existing |
| secrets | src/agents/models-config.*.test.ts | ❌ | ✅ pre-existing |
| checks-windows | src/plugins/install.test.ts | ❌ | ✅ pre-existing |

This PR is ready for review on its merits. Happy to answer any questions about the streaming implementation.

teconomix and others added 25 commits March 31, 2026 06:53
…MattermostPost

Both functions called the global fetch directly, bypassing the fetchImpl
stored in the client closure. This silently ignored any custom fetch
implementation passed to createMattermostClient (test mocks, proxy-aware
fetchers, SSRF guards).

Switch both to client.request<void>() which uses fetchImpl, auto-injects
the Authorization header, handles Content-Type for JSON bodies, and
propagates errors consistently with every other client function.

uploadMattermostFile retains its direct fetch call (multipart/form-data
conflicts with request's automatic Content-Type injection).

Addresses Greptile review: 'New functions bypass client.request,
ignoring custom fetchImpl'.
…flight tick in flush

Race condition: lastSentText was set synchronously before the async send/patch
completed, so a failed request was treated as delivered and subsequent ticks
skipped retrying. flushPendingPatch also didn't wait for in-flight interval ticks,
causing it to exit early (text === lastSentText guard) when a tick had just fired
but hadn't resolved yet, leaving streamMessageId null and forcing final delivery
to send a new post instead of patching the streamed one.

Fixes:
- schedulePatch interval: set lastSentText only after successful send/patch
- flushPendingPatch: wait up to 2s for in-flight patchSending before proceeding
- flushPendingPatch: set lastSentText after network success, not before
…andle reply target divergence

Three fixes addressing Codex review feedback:

1. Streaming opt-in: change streamingEnabled from (blockStreaming !== false) to
   (blockStreaming === true) so accounts without an explicit blockStreaming setting
   preserve their agent blockStreamingDefault instead of having edit-in-place
   streaming silently enabled.

2. Text limit: apply textLimit truncation in schedulePatch and flushPendingPatch
   before sending/patching. Intermediate preview posts only need the first chunk;
   final delivery goes through deliverMattermostReplyPayload which applies full
   chunking. This prevents oversize patch loops when responses exceed the limit.

3. Reply target divergence: when the final payload carries an explicit replyToId
   that resolves to a different root than the streaming post was created under
   (e.g. a [[reply_to_current]] directive), skip the in-place patch and fall
   through to normal delivery so the reply lands in the correct thread. Any
   orphaned stream post is deleted before the correct reply is sent.
…y after fallback succeeds

Two fixes from latest Codex review:

1. Truncation dedup: compare lastSentText against the truncated text (not the full
   rawText) in both schedulePatch and flushPendingPatch. Previously, once a reply
   grew past textLimit the guard compared the growing rawText against the stored
   rawText, so the post would be patched every 200 ms with the same truncated body
   — running into avoidable Mattermost rate limiting on long responses.

2. Orphan cleanup order: in the final-edit fallback path, deliver the replacement
   message first and only delete the orphaned stream post afterward. If the fallback
   send also fails, the user keeps the partial preview instead of losing all visible
   output.
…fore-delete for divergent thread; fix disableBlockStreaming undefined

Three fixes from latest Codex review:

1. Compute replyTargetDiverged before flushPendingPatch: previously the flush
   always ran first, potentially creating a preview post under effectiveReplyToId
   even when the final payload would land in a different thread. Now flush is
   skipped when the target diverges, avoiding a transient post in the wrong thread.

2. Divergent-thread cleanup order: when replyTargetDiverged, deliver the correct
   message first and delete the orphaned preview only afterward. This matches the
   same pattern as the fallback path — if delivery fails, the user keeps the
   partial preview rather than losing all visible output.

3. disableBlockStreaming: changed fallback from false to undefined so accounts
   without an explicit blockStreaming setting preserve the agent blockStreamingDefault
   instead of having block streaming forced on.
…eview

When the final payload has no text (media-only), the in-place text patch branch
is skipped, but the streamed preview post was previously left in the channel
alongside the attachment. The orphanedStreamId capture now always holds the
stream post ID (not only on divergent-target paths), and a new branch delivers
the media payload first and deletes the stale preview only after delivery
succeeds. If delivery fails, the preview stays visible as a fallback.
…ivery; stop retrying failed patches

Two fixes:

1. Divergent-target flush (Codex ID=2962544342): when replyTargetDiverged is true
   the flush is skipped to avoid creating a preview in the wrong thread, but the
   patch interval was not stopped and any in-flight first sendMessageMattermost
   was not awaited. If that send resolved after the divergent delivery returned,
   it created a stray preview post with no cleanup path. Fix: always stop the
   interval and wait for patchSending to settle (up to 2s) even on the divergent
   path, so streamMessageId is populated if the send resolves during this window
   and the orphan cleanup below can capture and delete it.

2. Patch-failure retry storm (Codex ID=2962544347): after a patchMattermostPost
   failure in the schedulePatch interval, streamMessageId remained set and every
   subsequent 200ms tick retried the same failing request, spamming the API until
   final delivery. Fix: call stopPatchInterval() in the catch block so retries
   stop immediately. The preview stays frozen at its last successful text; deliver()
   will patch or replace it when the final reply arrives.
…ives; stop retrying failed initial sends

Two fixes:

1. onSettled orphan cleanup (Codex ID=2963834802): add cleanup in the streaming
   dispatcher's onSettled callback for cases where the reply pipeline produces no
   final payload — e.g. messaging-tool sends suppressed by agent-runner-payloads.ts,
   or empty/heartbeat responses. Without this, onPartialReply could create a preview
   post that is never finalized or deleted. The cleanup mirrors the existing logic in
   openclaw#43041 (P5).

2. Initial-send retry storm (Codex ID=2963834806): call stopPatchInterval() in the
   sendMessageMattermost catch block, mirroring the existing fix for patchMattermostPost
   failures. Without this, a failed initial post attempt (missing post permission, DM
   creation failure, etc.) causes the 200ms interval to retry indefinitely for the rest
   of the response, flooding the API and gateway logs.
When the first preview POST is still in flight (patchSending=true, streamMessageId=null),
the previous onSettled check was skipped entirely — the POST would resolve after cleanup
and leave an orphaned preview post with no interval to clear it.

Fix: trigger cleanup when either streamMessageId is set OR patchSending is true.
Stop the interval immediately, clear pending state, then wait up to 3s for patchSending
to clear before capturing the final streamMessageId and deleting the post.
…led cleanup

Two fixes:

1. Failure latch (Codex ID=2964357928): add previewSendFailed boolean that is set true
   in both the initial-send and patch-edit catch blocks (alongside stopPatchInterval).
   schedulePatch() checks it before re-arming the interval, so subsequent onPartialReply
   calls during a run with a permanent failure (missing permission, DM-creation error)
   no longer recreate the timer and retry indefinitely.

2. patchInterval in onSettled guard (Codex ID=2964357925): onSettled now triggers
   cleanup when patchInterval is non-null, even if streamMessageId and patchSending
   are still falsy. This covers the window between schedulePatch arming the interval
   and the first 200ms tick flipping patchSending — if the run ends in that window
   (same-target messaging-tool sends, empty/heartbeat replies), the interval is now
   stopped and the pending state is cleared.
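The latch from fix 1 can be sketched as (names mirror the commit description; the real tick body does network I/O rather than a synchronous callback):

```typescript
// Sketch: once a tick fails permanently, previewSendFailed latches true and
// schedulePatch refuses to re-arm, so later onPartialReply calls cannot
// restart the retry loop for the rest of the run.
let previewSendFailed = false;
let patchInterval: ReturnType<typeof setInterval> | null = null;

function stopPatchInterval() {
  if (patchInterval) {
    clearInterval(patchInterval);
    patchInterval = null;
  }
}

function schedulePatch(onTick: () => void) {
  if (previewSendFailed || patchInterval) return; // latched failures never re-arm
  patchInterval = setInterval(() => {
    try {
      onTick();
    } catch {
      previewSendFailed = true; // permanent failure: latch and stop retrying
      stopPatchInterval();
    }
  }, 200);
}
```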
…tled cleanup

Add patchInflight Promise tracking to P4 (feat/mattermost-block-streaming-rebased),
mirroring the existing P5 approach. The onSettled cleanup previously used a 3-second
busy-wait on patchSending, which would race on slow Mattermost links: if the first
preview POST takes longer than 3s the cleanup exits early, patchSending is forced false,
and when the POST later resolves it creates an orphan post that is never deleted.

Fix: track the interval tick's async function as patchInflight. onSettled awaits it
directly so the cleanup always captures the final streamMessageId, regardless of how
long the POST takes. (Codex ID=2964616785)
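The promise-tracking approach can be sketched as (illustrative names; the real tick also patches existing posts, not only the first create):

```typescript
// Sketch: the tick's async work is tracked as patchInflight. Cleanup awaits the
// promise itself instead of busy-waiting on a flag, so it always observes the
// final streamMessageId even when the first POST is slower than any timeout.
let patchInflight: Promise<void> | null = null;
let streamMessageId: string | null = null;

function startTick(createPost: () => Promise<string>) {
  patchInflight = createPost()
    .then((id) => { streamMessageId = id; })
    .catch(() => {})                           // failures handled elsewhere
    .finally(() => { patchInflight = null; }); // cleared only once truly settled
}

async function onSettledCleanup(deletePost: (id: string) => Promise<void>) {
  if (patchInflight) await patchInflight; // no timeout race: wait for the actual request
  if (streamMessageId) {
    void deletePost(streamMessageId).catch(() => {});
    streamMessageId = null;
  }
}
```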
…ch (ID=2965023940)

After a successful patchMattermostPost in the isFinal branch, the code returned
immediately without delivering any media attachments. deliverMattermostReplyPayload
is the only path that uploads/sends media, so caption+image/file/audio payloads
were silently dropping the attachment whenever streaming was active and the patch
succeeded.

Fix: after a successful patch, check whether the payload has mediaUrls/mediaUrl.
If so, call deliverMattermostReplyPayload with text=undefined to deliver only the
media through the normal attachment path.
…s (ID=2965096969)

Patching preview text then delivering media separately splits a captioned-file
reply into two posts: a text-only preview + captionless file attachment.

New logic in the isFinal branch:
- Text-only payload: patch in place as before (no change for common case)
- Media payload: skip the patch, reset state, deliver full payload via
  deliverMattermostReplyPayload (text+media together), then delete preview.
- Patch failure: same fallback as media payload — full delivery + delete.
…-target wait

flushPendingPatch and the divergent-target branch were still using a 2-second
busy-wait on patchSending, which has the same race as the onSettled wait that was
already fixed: patchSending clears in the finally block before the network request
actually settles. Both paths now await patchInflight directly. (ID=2965256849)
…638)

resolveMattermostReplyRootId always returns threadRootId when it is set, so
comparing finalReplyToId against effectiveReplyToId was always false when the
streaming preview was created in a thread. Explicit reply directives like
[[reply_to:...]] could therefore never trigger the divergent-target path.

Fix: compare payload.replyToId directly against effectiveReplyToId instead of
going through the resolver. The resolver is still used to compute finalReplyToId
for the actual delivery call.
…5488514)

[[reply_to_current]] sets payload.replyToId to the child-post id inside the thread,
not the thread root. The raw !== comparison triggered false divergence: child-post !=
thread-root, even though they both resolve to the same Mattermost thread.

Fix: compute baseReplyToId via resolveMattermostReplyRootId without the payload, then
compare finalReplyToId against baseReplyToId. Both paths normalize child-post ids to
the thread root before comparison, so [[reply_to_current]] and explicit child-post
targets no longer falsely trigger the divergent-target path.
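The normalize-both-sides comparison can be illustrated like this (purely a toy model — the real resolver consults actual thread state, and `threadRoot` here is a stand-in mapping):

```typescript
// Toy sketch: map any child-post id to its thread root before comparing, so a
// [[reply_to_current]] child-post target in the same thread does not falsely
// count as a divergent reply target.
const threadRoot: Record<string, string> = { child1: "rootA", child2: "rootA" };
const toRoot = (id: string) => threadRoot[id] ?? id; // roots map to themselves

function replyTargetDiverged(previewRootId: string, payloadReplyToId: string): boolean {
  return toRoot(payloadReplyToId) !== toRoot(previewRootId);
}
```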
When blockStreaming is enabled, the blockStreamingClient was created
without the allowPrivateNetwork flag. On self-hosted / LAN Mattermost
deployments that set channels.mattermost.accounts.<id>.allowPrivateNetwork,
all preview patch/delete calls were subject to the SSRF guard default
private-network block, causing streamed posts to fail to patch/cleanup.

Pass allowPrivateNetwork: account.config?.allowPrivateNetwork === true
to createMattermostClient() — consistent with every other client
creation in this file.
…livery

The two orphan-cleanup deleteMattermostPost calls in the divergent-target
and media-only final paths were awaited after delivery already succeeded.
Since blockStreamingClient has no built-in timeout, a stalled Mattermost
DELETE could wedge withReplyDispatcher indefinitely even though the user
had already received the final response. Convert both to void/catch
fire-and-forget to match the established pattern used elsewhere in the same function.

Fixes: chatgpt-codex-connector P2 review comment 2998886441
The onSettled cleanup awaited deleteMattermostPost() without a
timeout. On stalled Mattermost links the DELETE can hang indefinitely,
keeping the cleanup closure alive even though the run is complete and
the user already received the final response.

Switch to void (fire-and-forget) — consistent with the three other
deleteMattermostPost call sites in this file that already use void.
The flush and schedulePatch paths create AbortControllers for timeout
handling, but sendMessageMattermost and createMattermostPost did not
accept or forward the signal. Timed-out POST requests could still
succeed, creating orphan preview messages with no tracked
streamMessageId to clean up.

Thread signal through: createMattermostPost params, MattermostSendOpts,
sendMessageMattermost, and both call sites in the block streaming
monitor.
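The threading can be sketched as (the function shape and option bag are illustrative, not the extension's actual signatures):

```typescript
// Sketch: the caller's AbortSignal is forwarded all the way to fetch, so a
// timed-out request is actually cancelled server-side-ish (the HTTP request is
// aborted) instead of silently succeeding and leaving an untracked orphan post.
async function createPost(
  baseUrl: string,
  token: string,
  message: string,
  opts: { signal?: AbortSignal } = {},
  fetchImpl: typeof fetch = fetch,
): Promise<string> {
  const res = await fetchImpl(`${baseUrl}/api/v4/posts`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
    signal: opts.signal, // abort cancels the request itself, not just a local timer
  });
  if (!res.ok) throw new Error(`create post failed: ${res.status}`);
  return ((await res.json()) as { id: string }).id;
}
```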
…iew sends

A partial that is only whitespace/newlines passes the non-empty check
but is rejected by sendMessageMattermost (which trims before posting),
causing the schedulePatch catch branch to latch previewSendFailed=true
and permanently disable streaming previews for the rest of the reply.

Check fullText.trim() instead of fullText before calling schedulePatch.
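The guard itself is small; sketched here as a standalone predicate (hypothetical name — the real code inlines the check before calling schedulePatch):

```typescript
// Sketch: trim before the non-empty check, because Mattermost trims before
// posting and rejects whitespace-only messages, which would otherwise latch
// previewSendFailed and disable streaming for the rest of the reply.
function shouldSchedulePreview(fullText: string): boolean {
  return fullText.trim().length > 0;
}
```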
@teconomix teconomix force-pushed the feat/mattermost-block-streaming-rebased branch from 70cf91d to b1f8b73 Compare March 31, 2026 04:53
@teconomix
Contributor Author

I'll be switching to Matrix soon. Mattermost is too limited for my needs. Cannot keep maintaining this patch much longer. Is there a way that we can merge it and #43041? Anything needed from my side?

@mukhtharcm

@teconomix teconomix closed this Apr 1, 2026
@teconomix
Contributor Author

teconomix commented Apr 1, 2026

Abandoning — we have migrated from Mattermost to Matrix and are no longer maintaining Mattermost patches. Thanks for the reviews; maybe someone wants to take it from here.

@teconomix teconomix reopened this Apr 1, 2026

Labels

channel: mattermost (Channel integration: mattermost), size: M
