fix(msteams): deliver all text blocks in multi-block replies#49587

Merged
BradGroux merged 1 commit into openclaw:main from sudie-codes:fix/msteams-multi-block-reply
Mar 23, 2026

Conversation

@sudie-codes
Contributor

@sudie-codes sudie-codes commented Mar 18, 2026

Summary

Fixes #29379 — Multi-block agent replies (code + explanation + code) are silently truncated to only the first text block in MS Teams.

What & Why

Problem: When the agent replies with multiple text blocks (e.g., code block + explanation + code block), only the first block is delivered to the Teams chat. Users receive incomplete/truncated responses with no indication that content was lost.

Root cause: Each deliver() call in reply-dispatcher.ts opened an independent continueConversation() call via sendMSTeamsMessages(). Teams silently drops messages when multiple proactive sends arrive in rapid succession — users only ever saw the content from the first continueConversation() call.

Fix:

  • reply-dispatcher.ts: Changed deliver() to accumulate rendered messages into a pendingMessages[] buffer instead of sending immediately. Added flushPendingMessages() that drains the buffer in a single sendMSTeamsMessages() call. Wrapped markDispatchIdle to flush all pending messages before signaling idle.
  • monitor-handler/message-handler.ts: Updated onSettled callback to return markDispatchIdle() so the async flush is properly awaited.

Files changed: extensions/msteams/src/reply-dispatcher.ts, extensions/msteams/src/monitor-handler/message-handler.ts
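The buffering approach described above can be sketched roughly as follows. This is a minimal illustration of the pattern, not the extension's actual code: `RenderedMessage`, `createBufferedDispatcher`, and the stubbed `sendMSTeamsMessages` are hypothetical stand-ins for the real `reply-dispatcher.ts` internals.

```typescript
type RenderedMessage = { text: string };

// Stub: in the real extension this opens a single continueConversation() call.
async function sendMSTeamsMessages(messages: RenderedMessage[]): Promise<string[]> {
  return messages.map((_, i) => `id-${i}`);
}

function createBufferedDispatcher(baseMarkDispatchIdle: () => void) {
  const pendingMessages: RenderedMessage[] = [];

  // deliver() only accumulates; nothing is sent yet.
  const deliver = (rendered: RenderedMessage[]) => {
    pendingMessages.push(...rendered);
  };

  // Drain the buffer into one batched send so Teams sees a single proactive call.
  const flushPendingMessages = async (): Promise<string[]> => {
    if (pendingMessages.length === 0) return [];
    const toSend = pendingMessages.splice(0); // drain first to avoid re-entrant double-send
    return sendMSTeamsMessages(toSend);
  };

  // Wrapped idle signal: flush everything, then signal idle even if the flush fails.
  const markDispatchIdle = () =>
    flushPendingMessages().finally(() => baseMarkDispatchIdle());

  return { deliver, flushPendingMessages, markDispatchIdle, pendingMessages };
}
```

The key detail is that `markDispatchIdle` returns the flush promise, so a caller that awaits it (as the `onSettled` change does) cannot report the turn settled while messages are still in flight.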

Screenshots

N/A — This is a server-side message delivery fix. The change affects how outbound messages are batched before being sent via Bot Framework's continueConversation(). Verification requires sending a multi-block agent response through a running Teams bot. The fix is validated via unit test that verifies batching behavior.

Test Results

AI Disclosure

  • AI-assisted (Claude Code with team orchestration)
  • Fully tested — new regression test + all existing tests pass
  • I understand what the code does: buffers outbound message blocks and flushes them in a single `continueConversation()` call to prevent Teams from silently dropping rapid successive proactive sends

@openclaw-barnacle openclaw-barnacle bot added channel: msteams Channel integration: msteams size: L labels Mar 18, 2026
@greptile-apps
Contributor

greptile-apps bot commented Mar 18, 2026

Greptile Summary

This PR fixes two related MSTeams reliability issues: (1) the primary fix batches all reply blocks from a turn into a single sendMSTeamsMessages() call, sharing one continueConversation() context so Teams can no longer silently drop blocks 2+; (2) it adds caching of the Graph-native chat ID (graphChatId) to StoredConversationReference so SharePoint uploads in personal DMs use a format the Graph API accepts.

Notable changes:

  • reply-dispatcher.ts: deliver() now accumulates into pendingMessages[]; the wrapped markDispatchIdle() flushes atomically and is correctly awaited via onSettled.
  • monitor.ts: module-level singleton guard prevents double-start / EADDRINUSE crash loops.
  • message-handler.ts: security hardening — empty sender allowlist no longer downgrades a configured route-level policy to "open".
  • send-context.ts / graph-upload.ts: new resolveGraphChatId helper with caching.

Issues found:

  • send-context.ts line 179: const store = createMSTeamsConversationStoreFs() is re-declared inside the if (resolved) block, shadowing the store already in scope at line 117 — reuse the existing instance.
  • graph-upload.ts: the two consecutive if-blocks checking chats.length both return the same expression and can be collapsed into one.
  • send-context.ts: a null result from resolveGraphChatId (resolution failed) is not persisted in the store, so each subsequent proactive send retries the Graph lookup rather than short-circuiting.

Confidence Score: 4/5

  • Safe to merge; core batching fix is correct and well-tested, issues found are minor code quality concerns.
  • The primary bug fix (single-continueConversation batching) is logically sound, has a direct regression test, and passes 238 existing tests. The singleton guard and onSettled promise-propagation are also correct. The three issues flagged are all style/minor-performance — none block correctness of the stated fix.
  • extensions/msteams/src/send-context.ts (shadowed variable + no null-result caching) and extensions/msteams/src/graph-upload.ts (redundant if-blocks).

Comments Outside Diff (1)

  1. extensions/msteams/src/graph-upload.ts, line 206-214 (link)

    P2 Redundant chats.length === 1 guard before chats.length > 0

    Both branches return the same expression (chats[0].id). The first guard (=== 1) is only ever reached when length is exactly 1, but the second guard (> 0) would cover that case too — so the first block does no additional filtering. The two can be collapsed into one, with an updated comment that covers both cases.
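A collapsed version of the guard might look like the sketch below. The `Chat` shape and `pickChatId` helper are hypothetical; the real code in `graph-upload.ts` operates on a Graph `/chats` query result.

```typescript
// Hypothetical shape of a Graph chats query result.
type Chat = { id?: string };

function pickChatId(chats: Chat[]): string | null {
  // Single filtered result or first of multiple: one guard covers both cases
  // (best-effort for 1:1 chats with a known user).
  if (chats.length > 0 && chats[0]?.id) {
    return chats[0].id;
  }
  return null;
}
```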


Last reviewed commit: "fix(msteams): batch ..."

Comment on lines +179 to +180
const store = createMSTeamsConversationStoreFs();
await store.upsert(conversationId, { ...ref, graphChatId: resolved });

P2 Shadowed store variable creates a duplicate FS instance

A second const store = createMSTeamsConversationStoreFs() is declared inside the if (resolved) block, which shadows the store already in scope from line 117. Both calls create an equivalent backing store, so this works, but it creates an unnecessary second instance and may confuse future readers. Reuse the outer store.

Suggested change:

Before:
const store = createMSTeamsConversationStoreFs();
await store.upsert(conversationId, { ...ref, graphChatId: resolved });

After:
await store.upsert(conversationId, { ...ref, graphChatId: resolved });

(Remove the inner const store = createMSTeamsConversationStoreFs(); line entirely and use the store already in scope.)


Comment on lines +174 to +188
  });
  graphChatId = resolved;

  // Cache in the conversation store so subsequent sends skip the Graph lookup
  if (resolved) {
    const store = createMSTeamsConversationStoreFs();
    await store.upsert(conversationId, { ...ref, graphChatId: resolved });
  }
} catch (err) {
  log.debug?.("failed to resolve Graph chat ID", {
    conversationId,
    error: String(err),
  });
  graphChatId = null;
}

P2 null resolution result is never cached — causes a redundant Graph API call on every subsequent send

When resolveGraphChatId returns null (API down, 401, no chats found, etc.), graphChatId is set to null locally but the store is not updated. Because ref.graphChatId remains undefined, the graphChatId === undefined guard passes again on the next resolveMSTeamsSendContext call, firing another Graph request that will likely also fail.

Consider caching the failure too (using a sentinel like an empty string, or adding an explicit graphChatIdResolved: boolean flag) so that repeat sends don't each pay the Graph round-trip cost. At minimum this is worth a log.warn or an in-process soft-TTL so callers know resolution was already attempted.

If the retry-on-every-send behaviour is intentional (to recover from transient failures), a short comment explaining the decision would help.
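One way to cache the failure, using the reviewer's suggested explicit flag, is sketched below. The `graphChatIdResolved` field and `resolveWithCache` wrapper are hypothetical additions, not the current code; the stubbed `resolveGraphChatId` always fails here purely to exercise the cached-failure path.

```typescript
// Hypothetical extension of the stored conversation reference.
type ConversationRef = {
  graphChatId?: string | null;
  graphChatIdResolved?: boolean;
};

let graphLookups = 0;

// Stub for the Graph lookup; always fails to demonstrate caching of failures.
async function resolveGraphChatId(): Promise<string | null> {
  graphLookups++;
  return null;
}

async function resolveWithCache(ref: ConversationRef): Promise<string | null> {
  if (ref.graphChatIdResolved) {
    // Resolution already attempted (success or failure): skip the round-trip.
    return ref.graphChatId ?? null;
  }
  const resolved = await resolveGraphChatId();
  ref.graphChatId = resolved;
  ref.graphChatIdResolved = true; // cache failures too, not only successes
  return resolved;
}
```

As the comment notes, if transient-failure recovery matters, the flag could instead carry a timestamp so resolution is retried after a soft TTL rather than never.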



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 82efd9063a

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +116 to +117
const toSend = pendingMessages.splice(0);
const ids = await sendMSTeamsMessages({

P1 Preserve pending blocks when batched send fails mid-turn

Draining pendingMessages before sendMSTeamsMessages means a single permanent failure in the batch (after retries) drops all later blocks in the same turn. sendMSTeamsMessages sends sequentially and throws on the first failed message, so with this buffering change the trailing blocks are never attempted, whereas previously later deliver() calls still ran independently. This shows up when one block is rejected (for example, a bad attachment) and subsequent plain-text blocks silently disappear.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: dcee47932a


Comment on lines +146 to +149
for (const msg of toSend) {
  try {
    const msgIds = await sendMessages([msg]);
    ids.push(...msgIds);

P1 Avoid resending messages already delivered before batch failure

flushPendingMessages retries every item in toSend after any batch error, but sendMSTeamsMessages sends sequentially and can fail after earlier messages were already posted. In that mid-batch failure case (for example, block 2 has a bad attachment), blocks sent before the failure are sent again in this fallback loop, so users receive duplicate reply blocks.
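A fallback that retries only the undelivered tail could look like the sketch below. This is an illustration, not the PR's code: the real `sendMSTeamsMessages` does not necessarily report how many messages it posted before failing, so the `sentCount` property attached to the error here is an invented way to carry that information.

```typescript
type Msg = { text: string; bad?: boolean };

const delivered: string[] = [];

// Sequential batch send that throws on the first failing message, recording
// how many messages were already posted (hypothetical error shape).
async function sendBatch(msgs: Msg[]): Promise<string[]> {
  const ids: string[] = [];
  for (const m of msgs) {
    if (m.bad) {
      throw Object.assign(new Error("send failed"), { sentCount: ids.length });
    }
    delivered.push(m.text);
    ids.push(`id-${m.text}`);
  }
  return ids;
}

async function flushWithTailRetry(msgs: Msg[]): Promise<string[]> {
  try {
    return await sendBatch(msgs);
  } catch (err) {
    const sent = (err as { sentCount?: number }).sentCount ?? 0;
    const ids: string[] = [];
    // Retry only the messages after the last confirmed delivery, one at a time.
    for (const m of msgs.slice(sent)) {
      try {
        ids.push(...(await sendBatch([m])));
      } catch {
        // Skip the permanently failing block but keep sending the rest.
      }
    }
    return ids;
  }
}
```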


Comment on lines +150 to +153
} catch {
  // Log individual failure but continue so remaining blocks are sent.
  params.log.debug?.("individual message send failed, continuing with remaining blocks");
}

P1 Surface flush failure when per-message retries all fail

The per-message fallback swallows each send exception and only emits a debug log, so flushPendingMessages can resolve successfully with ids=[] even when nothing was delivered (e.g., revoked token or blocked conversation). Because deliver() no longer throws, dispatcher onError is never invoked in this path, and the turn is reported as settled without any user-visible failure signal.
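One way to surface a total failure while still tolerating partial ones is sketched here. `retryIndividually` and `Outcome` are hypothetical names; the point is only that resolving with an empty result after a non-empty flush should reject so the dispatcher's onError path can fire.

```typescript
type Outcome = { ids: string[]; failures: number };

async function retryIndividually(
  msgs: string[],
  sendOne: (m: string) => Promise<string>,
): Promise<Outcome> {
  const ids: string[] = [];
  let failures = 0;
  for (const m of msgs) {
    try {
      ids.push(await sendOne(m));
    } catch {
      failures++; // tolerate individual failures so later blocks still go out
    }
  }
  // Nothing delivered at all: surface it instead of resolving "successfully".
  if (msgs.length > 0 && ids.length === 0) {
    throw new Error(`flush failed: 0/${msgs.length} messages delivered`);
  }
  return { ids, failures };
}
```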


Contributor

@BradGroux BradGroux left a comment


Sound batching architecture. Accumulating messages in pendingMessages[] and flushing in a single continueConversation() call is the right fix for Teams silently dropping rapid proactive sends.

Good details:

  • splice(0) drain-then-send avoids double-send on re-entrant flushes
  • Fallback to individual sends on batch failure is solid resilience
  • .finally() on baseMarkDispatchIdle ensures idle signal fires even on flush errors
  • Hoisting tableMode/mediaMaxBytes outside the closure is a nice optimization

Two things to verify:

  1. Does the onSettled callback actually await the returned promise from markDispatchIdle()? If not, the flush is fire-and-forget which could race on shutdown.
  2. Confirm all error paths eventually call markDispatchIdle; if it's never called, buffered messages leak silently.

Shares ~400 lines with #49580 and #49585. Needs rebase after those land.

✅ Approve.

…ion call (openclaw#29379)

Teams silently drops blocks 2+ when each deliver() opens its own
continueConversation() call. Accumulate rendered messages across all
deliver() calls and flush them together in markDispatchIdle().

On batch failure, retry each message individually so trailing blocks
are not silently lost. Log a warning when any individual messages fail
so flush failures are visible in logs.
@sudie-codes sudie-codes force-pushed the fix/msteams-multi-block-reply branch from dcee479 to 0b93758 on March 20, 2026 at 19:16

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0b93758db9


Comment on lines +176 to +180
deliver: async (payload) => {
  // Render the payload to messages and accumulate them. All messages from
  // this turn are flushed together in markDispatchIdle() so they go out
  // in a single continueConversation() call.
  const messages = renderReplyPayloadsToMessages([payload], {

P1 Keep buffered sends under dispatcher idle accounting

deliver now only buffers messages and returns immediately, so the reply dispatcher can become idle before any Teams API sends happen. withReplyDispatcher waits for dispatcher.waitForIdle() before onSettled (src/auto-reply/dispatch.ts), while restart deferral relies on getTotalPendingReplies() (src/gateway/server-reload-handlers.ts); in the msteams webhook path (which is not command-queue tracked), a config-triggered restart can therefore occur during flushPendingMessages(), interrupting the proactive send and dropping buffered blocks.
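A generic way to keep an in-flight flush visible to restart deferral is a simple in-process counter, sketched below. This is entirely hypothetical: the counter and `trackedFlush` wrapper are illustrative, and the real accounting goes through `getTotalPendingReplies()` in the gateway, not a module-level variable.

```typescript
let pendingCount = 0;

// What restart deferral would poll (stand-in for getTotalPendingReplies()).
function getTotalPendingOutbound(): number {
  return pendingCount;
}

// Wrap a flush so the pending counter covers its whole lifetime: a restart
// check that sees pendingCount > 0 should defer until the send completes.
async function trackedFlush<T>(flush: () => Promise<T>): Promise<T> {
  pendingCount++;
  try {
    return await flush();
  } finally {
    pendingCount--; // decrement even when the flush throws
  }
}
```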


Contributor

@BradGroux BradGroux left a comment


Solid fix for the multi-block truncation bug. The buffered flush approach with individual retry fallback is well thought out. tableMode/mediaMaxBytes hoisting is a nice cleanup.

The change to onSettled returning the Promise (vs fire-and-forget) is correct and necessary for the batched flush to work.

LGTM ✅

@BradGroux BradGroux merged commit 8b5eeba into openclaw:main Mar 23, 2026
36 of 38 checks passed
frankekn pushed a commit to artwalker/openclaw that referenced this pull request Mar 23, 2026
…ion call (openclaw#29379) (openclaw#49587)

Teams silently drops blocks 2+ when each deliver() opens its own
continueConversation() call. Accumulate rendered messages across all
deliver() calls and flush them together in markDispatchIdle().

On batch failure, retry each message individually so trailing blocks
are not silently lost. Log a warning when any individual messages fail
so flush failures are visible in logs.
alexey-pelykh pushed a commit to remoteclaw/remoteclaw that referenced this pull request Mar 24, 2026
…ion call (openclaw#29379) (openclaw#49587)

Teams silently drops blocks 2+ when each deliver() opens its own
continueConversation() call. Accumulate rendered messages across all
deliver() calls and flush them together in markDispatchIdle().

On batch failure, retry each message individually so trailing blocks
are not silently lost. Log a warning when any individual messages fail
so flush failures are visible in logs.

(cherry picked from commit 8b5eeba)
alexey-pelykh added a commit to remoteclaw/remoteclaw that referenced this pull request Mar 24, 2026
* fix(msteams): resolve Graph API chat ID for DM file uploads (openclaw#49585)

Fixes openclaw#35822 — Bot Framework conversation.id format is incompatible with
Graph API /chats/{chatId}. Added resolveGraphChatId() to look up the
Graph-native chat ID via GET /me/chats, cached in the conversation store.

Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
(cherry picked from commit 06845a1)

* test: fix fetch mock typing

(cherry picked from commit 0f43dc4)

* fix: restore repo-wide gate after upstream sync

(cherry picked from commit 14074d3)

* test(msteams): align adapter doubles with interfaces

(cherry picked from commit 5b7ae24)

* test: tighten msteams regression assertions

(cherry picked from commit c8a36c6)

* test: dedupe msteams attachment redirects

(cherry picked from commit 017c0dc)

* MSTeams: move outbound session routing behind plugin boundary

(cherry picked from commit 028f3c4)

* fix: remove session-route.ts — depends on missing upstream infrastructure

* test(msteams): cover graph helpers

(cherry picked from commit 1ea2593)

* fix(test): split msteams attachment helpers

(cherry picked from commit 23c8af3)

* test: share directory runtime helpers

(cherry picked from commit 38b0986)

* fix: stabilize build dependency resolution (openclaw#49928)

* build: mirror uuid for msteams

Add uuid to both the msteams bundled extension and the root package so the workspace build can resolve @microsoft/agents-hosting during tsdown while standalone extension installs also have the runtime dependency available.

Regeneration-Prompt: |
  pnpm build failed because @microsoft/agents-hosting 1.3.1 requires uuid in its published JS but does not declare it in its package manifest. The msteams extension dynamically imports that package, and the workspace build resolves it from the root dependency graph. Mirror uuid into the root package for workspace builds and keep it in extensions/msteams/package.json so standalone plugin installs also resolve it. Update the lockfile to match the manifest changes.

* build: prune stale plugin dist symlinks

Remove stale dist and dist-runtime plugin node_modules symlinks before tsdown runs. These links point back into extension installs, and tsdown's clean step can traverse them on rebuilds and hollow out the active pnpm dependency tree before plugin-sdk declaration generation runs.

Regeneration-Prompt: |
  pnpm build was intermittently failing in the plugin-sdk:dts phase after earlier build steps had already run. The symptom looked like missing root packages such as zod, ajv, commander, and undici even though a fresh install briefly fixed the problem. Investigate the build pipeline step by step rather than patching TypeScript errors. Confirm whether rebuilds mutate node_modules, identify the first step that does it, and preserve existing runtime-postbuild behavior.
  The key constraint is that dist and dist-runtime plugin node_modules links are intentional for runtime packaging, so do not remove that feature globally. Instead, make rebuilds safe by deleting only stale symlinks left in generated output before invoking tsdown, so tsdown cleanup cannot recurse back into the live pnpm install tree. Verify with repeated pnpm build runs.
(cherry picked from commit 505d140)

* test(msteams): cover store and live directory helpers

(cherry picked from commit 55e0c63)

* test(msteams): cover setup wizard status

(cherry picked from commit 653d69e)

* test: tighten msteams regression assertions

(cherry picked from commit 689a734)

* refactor: share teams drive upload flow

(cherry picked from commit 6b04ab1)

* test(msteams): cover routing and setup

(cherry picked from commit 774a206)

* msteams: extend MSTeamsAdapter and MSTeamsActivityHandler types; implement self() (openclaw#49929)

- Add updateActivity/deleteActivity to MSTeamsAdapter
- Add onReactionsAdded/onReactionsRemoved to MSTeamsActivityHandler
- Implement directory self() to return bot identity from appId credential
- Add tests for self() in channel.directory.test.ts

(cherry picked from commit 7c3af37)

* test(msteams): cover upload and webhook helpers

(cherry picked from commit 7d11f6c)

* msteams: fix sender allowlist bypass when route allowlist is configured (GHSA-g7cr-9h7q-4qxq) (openclaw#49582)

When a route-level (teams/channel) allowlist was configured but the sender
allowlist (allowFrom/groupAllowFrom) was empty, resolveSenderScopedGroupPolicy
would downgrade the effective group policy from "allowlist" to "open", allowing
any Teams user to interact with the bot.

The fix: when channelGate.allowlistConfigured is true and effectiveGroupAllowFrom
is empty, preserve the configured groupPolicy ("allowlist") rather than letting
it be downgraded to "open". This ensures an empty sender allowlist with an active
route allowlist means deny-all rather than allow-all.

Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
(cherry picked from commit 897cda7)

* fix(msteams): batch multi-block replies into single continueConversation call (openclaw#29379) (openclaw#49587)

Teams silently drops blocks 2+ when each deliver() opens its own
continueConversation() call. Accumulate rendered messages across all
deliver() calls and flush them together in markDispatchIdle().

On batch failure, retry each message individually so trailing blocks
are not silently lost. Log a warning when any individual messages fail
so flush failures are visible in logs.

(cherry picked from commit 8b5eeba)

* test(msteams): cover poll and file-card helpers

(cherry picked from commit 8ff277d)

* test: dedupe msteams consent auth fixtures

(cherry picked from commit a9d8518)

* refactor: share dual text command gating

(cherry picked from commit b61bc49)

* test: share msteams safe fetch assertions

(cherry picked from commit d4d0091)

* MSTeams: lazy-load runtime-heavy channel paths

(cherry picked from commit da4f825)

* fix(msteams): isolate probe test env credentials

(cherry picked from commit e9078b3)

* test: dedupe msteams policy route fixtures

(cherry picked from commit f2300f4)

* fix: fix remaining openclaw references in cherry-picked msteams files

* fix: adapt cherry-picks for fork TS strictness

* fix: revert cross-cutting refactors, keep msteams-specific changes only

* fix: format cherry-picked files with oxfmt

---------

Co-authored-by: sudie-codes <[email protected]>
Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
Co-authored-by: Peter Steinberger <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>
Co-authored-by: Gustavo Madeira Santana <[email protected]>
Co-authored-by: Josh Lehman <[email protected]>
furaul pushed a commit to furaul/openclaw that referenced this pull request Mar 24, 2026
…ion call (openclaw#29379) (openclaw#49587)

Teams silently drops blocks 2+ when each deliver() opens its own
continueConversation() call. Accumulate rendered messages across all
deliver() calls and flush them together in markDispatchIdle().

On batch failure, retry each message individually so trailing blocks
are not silently lost. Log a warning when any individual messages fail
so flush failures are visible in logs.
npmisantosh pushed a commit to npmisantosh/openclaw that referenced this pull request Mar 25, 2026
…ion call (openclaw#29379) (openclaw#49587)

Teams silently drops blocks 2+ when each deliver() opens its own
continueConversation() call. Accumulate rendered messages across all
deliver() calls and flush them together in markDispatchIdle().

On batch failure, retry each message individually so trailing blocks
are not silently lost. Log a warning when any individual messages fail
so flush failures are visible in logs.

Labels

channel: msteams Channel integration: msteams size: M

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: MS Teams plugin drops all text blocks after the first in multi-block replies

2 participants