
msteams: implement Teams AI agent UX best practices #51808

Merged
SidU merged 26 commits into openclaw:main from SidU:claude/migrate-teams-sdk-PKHin
Mar 24, 2026

msteams: implement Teams AI agent UX best practices#51808
SidU merged 26 commits intoopenclaw:mainfrom
SidU:claude/migrate-teams-sdk-PKHin

Conversation

@SidU (Contributor) commented Mar 21, 2026

Summary

Migrates the Teams extension from @microsoft/agents-hosting to the official Teams SDK (@microsoft/teams.apps + @microsoft/teams.api) and implements Microsoft's AI UX best practices for Teams agents.

SDK Migration

  • Replace @microsoft/agents-hosting (CloudAdapter, MsalTokenProvider, authorizeJWT) with Teams SDK App, Client, and JwtValidator
  • Custom lightweight adapter wrapping the Teams SDK REST client for proactive messaging, updateActivity, and deleteActivity
  • JWT validation via createServiceTokenValidator from @microsoft/teams.apps (validates signature via JWKS, audience, issuer, expiration)
  • Token factory pattern — each API call fetches a fresh token instead of caching a potentially stale value
  • User-Agent header on all outbound HTTP requests
  • Wire Teams clientInfo timezone into the agent system prompt

AI UX Features

  • AI-generated label: All outbound messages include the AIGeneratedContent entity and channelData.feedbackLoopEnabled so Teams renders the "AI generated" badge and native thumbs up/down UI
  • Streaming responses (1:1 only): Uses the Teams streaminfo entity protocol with onPartialReply to progressively update messages as the LLM generates tokens, throttled at 1.5s via the shared draft-stream-loop
  • Informative status updates: Sends a randomized status message (e.g. "Scuttling through ideas...") immediately when a message arrives, showing a blue progress bar while the LLM processes
  • Typing indicators: Only sent in group chats (Teams doesn't support them in channels; 1:1 uses streaming instead)
  • Welcome card: Sends an Adaptive Card with configurable prompt starters when the bot is added to a 1:1 chat
  • Feedback with reflective learning: Handles message/submitAction invoke for thumbs up/down. Negative feedback triggers a fire-and-forget background reflection — the agent reviews its response, derives a learning, and proactively messages the user with adjustments. Feedback events are persisted as type: "custom" / event: "feedback" in session JSONL for mining

Infrastructure

  • updateActivity REST helper and updateActivity on MSTeamsTurnContext for streaming
  • invokeResponse handling — no-op in custom adapter (HTTP 200 sent by process())
  • Stream fallback for long replies (>4000 chars) — falls through to chunked delivery
  • Stream finalization for empty text (clears progress bar when agent sends a card via tool)
  • Shared AI_GENERATED_ENTITY constant extracted to ai-entity.ts
  • New config fields: welcomeCard, promptStarters, groupWelcomeCard, feedbackEnabled, feedbackReflection, feedbackReflectionCooldownMs
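
A hypothetical config fragment showing the new fields together (nesting and values are illustrative; only the field names come from this PR):

```json
{
  "welcomeCard": true,
  "promptStarters": ["What can you do?", "Summarize my last meeting"],
  "groupWelcomeCard": false,
  "feedbackEnabled": true,
  "feedbackReflection": true,
  "feedbackReflectionCooldownMs": 600000
}
```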

Bug Fix

  • Copy-pasted image downloads: Added smba.trafficmanager.net to DEFAULT_MEDIA_AUTH_HOST_ALLOWLIST — pasted images arrive via Bot Framework attachment URLs at this host and require the bot's Bearer token, which was previously not sent because the host was missing from the auth allowlist

Known Issues

Closes #51806

Manual test evidence

Tested on Azure VM deployment with Teams Web. 29/30 tests passed, 0 failed, 1 not tested (group chat). Full test report with steps/expected/actual for all 30 tests.

1:1 reply with AI label and streaming:

Bot reply showing AI generated badge

Streaming in progress with typing dots and partial text

Feedback dialog (thumbs up/down):

Feedback dialog - What did you like?

Channel @mention reply (threaded, no streaming):

Channel reply threaded with AI label and feedback buttons

Image handling fix (before/after):

Bot correctly describes pasted image after auth fix

Test plan

  • 277 msteams extension tests pass (31 test files, 1 pre-existing upstream failure)
  • Typecheck clean
  • Formatting clean (pnpm format:check)
  • Manual: Basic reply in 1:1 chat within ~4s
  • Manual: AI label badge ("AI generated") renders on all bot responses
  • Manual: Streaming shows progressive text updates in 1:1 chats with typing dots and Stop button
  • Manual: Long streaming response completes with markdown formatting
  • Manual: Thumbs up feedback dialog + submission confirmed
  • Manual: Thumbs down feedback dialog + server-side reflection confirmed
  • Manual: Welcome card with prompt starters on bot install
  • Manual: View prompts button shows suggestions
  • Manual: Copy-pasted image received and described by bot (after auth fix)
  • Manual: Rapid messages (500ms apart) get separate replies, no duplicates
  • Manual: @mention in channel replies in thread with AI label + feedback
  • Manual: No streaming in channels (single message delivery)
  • Manual: Replies correctly threaded in channels
  • Manual: Channel message without @mention — bot does NOT reply
  • Manual: @mention autocomplete resolves bot
  • Manual: DM allowlist enforcement (drops non-allowlisted users)
  • Manual: Pairing request creation and approval flow
  • Manual: JWT validation rejects unauthenticated requests (401)
  • Manual: HTTPS endpoint reachable via Caddy reverse proxy
  • Manual: Group chat @mention (not tested — no group chat available)

🤖 Generated with Claude Code

@openclaw-barnacle bot added the channel: msteams, size: XL, and maintainer (Maintainer-authored PR) labels — Mar 21, 2026
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 6cd22d62c6

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@greptile-apps bot (Contributor) commented Mar 21, 2026

Greptile Summary

This PR migrates the msteams extension from @microsoft/agents-hosting to @microsoft/teams.apps/@microsoft/teams.api and implements Microsoft's mandatory AI UX requirements: the AIGeneratedContent entity on all outbound messages, streaming responses for 1:1 chats via the streaminfo protocol, a welcome Adaptive Card, and a feedback-reflection loop triggered by thumbs-down reactions.

Key changes:

  • sdk.ts fully rewritten — CloudAdapter/MsalTokenProvider replaced with a hand-rolled adapter using @microsoft/teams.api Client; token acquisition uses app.getBotToken() / app.getAppGraphToken()
  • New TeamsHttpStream class (streaming-message.ts) implements progressive chunk delivery with sequence numbers and a shared draft-stream-loop throttle
  • New feedback-reflection.ts module for fire-and-forget reflection on negative feedback with cooldown, session learning persistence, and proactive follow-up
  • monitor.ts replaces authorizeJWT(authConfig) with a Bearer-prefix-only check — inbound JWT signatures and issuer claims are no longer validated (see inline comment); this needs to be addressed before production deployment
  • buildActivity and sendMSTeamsMessages now always attach AI_GENERATED_ENTITY and channelData.feedbackLoopEnabled; six new config fields added to MSTeamsConfig
  • dispatchReplyFromConfig gains a configOverride parameter used for per-sender timezone injection

Confidence Score: 2/5

  • Not safe to merge until the JWT validation gap is resolved — the webhook currently accepts requests from any actor who can guess the endpoint URL.
  • The feature work itself (streaming, welcome card, feedback reflection, AI entity labeling) is well-structured with good test coverage. However, replacing authorizeJWT(authConfig) with a Bearer-prefix check removes the only mechanism that verified incoming activities were legitimately signed by Microsoft Bot Framework. The adapter.process() handler in sdk.ts does not perform inbound JWT validation either, so the security boundary is effectively gone. This is a production-critical gap that warrants blocking the merge regardless of how polished the rest of the PR is.
  • extensions/msteams/src/monitor.ts (JWT middleware) and extensions/msteams/src/sdk.ts (adapter.process — no inbound token validation)
Path: extensions/msteams/src/monitor.ts
Line: 256-268

Comment:
**JWT validation completely removed**

The previous `authorizeJWT(authConfig)` middleware from `@microsoft/agents-hosting` performed full JWT validation: signature verification against Microsoft's public keys, issuer check, audience check, and expiry. The new middleware only checks that the Authorization header starts with `"Bearer "` — it accepts any token string, including a random UUID.

The comment says "The App registers its own JWT validation on POST /api/messages, so we delegate auth to the `adapter.process()` call". However, `adapter.process()` in `sdk.ts` never validates the inbound JWT — it only calls `app.getBotToken()` to acquire an *outbound* token for sending replies. There is no inbound signature verification anywhere in the new code path.

This means any actor who can reach the webhook URL can send arbitrary bot activities by including `Authorization: Bearer <anything>` in their request. To fix this, the `@microsoft/teams.apps` `App` instance should have its validation middleware mounted, or validation should be performed manually using Microsoft's OpenID Connect keys (e.g. `https://login.botframework.com/v1/.well-known/openidconfiguration`).

---
Path: extensions/msteams/src/feedback-reflection.ts
Line: 286

Comment:
**Unused `core` variable**

`getMSTeamsRuntime()` is called but `core` is never referenced anywhere in `storeSessionLearning`. The function was likely refactored to use direct `fs` imports instead, leaving this dead code behind.

```suggestion
async function storeSessionLearning(params: {
  storePath: string;
  sessionKey: string;
  learning: string;
}): Promise<void> {
  // Use the session store to append a custom event with the learning
  const fs = await import("node:fs/promises");
```

---
Path: extensions/msteams/src/monitor-handler.ts
Line: 242-255

Comment:
**`thumbedDownResponse` never populated**

`runFeedbackReflection` accepts a `thumbedDownResponse` field that is included verbatim in the reflection prompt so the LLM understands exactly what it said wrong. However, the call site here never passes it — the field will always be `undefined`. The `feedbackMessageId` is available but not used to fetch the original message text.

Without the original message content, the reflection prompt only tells the agent "a user indicated your previous response wasn't helpful" with no further context, significantly reducing the quality of any derived learning. Consider fetching the original message text from the session transcript using `feedbackMessageId` before dispatching the reflection, or at a minimum document this as a known limitation.

---
Path: extensions/msteams/src/streaming-message.ts
Line: 43-49

Comment:
**Duplicate `AI_GENERATED_ENTITY` constant**

This constant is defined identically in both `messenger.ts` and `streaming-message.ts`. Consider extracting it to a shared module (e.g. `ai-entity.ts`) and importing it in both places to keep the definition DRY and ensure any future schema changes are applied consistently.


Last reviewed commit: "msteams: fix feedbac..."

@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: b02728c89d

@SidU force-pushed the claude/migrate-teams-sdk-PKHin branch from b02728c to 544b5ad — March 22, 2026 00:34
@SidU self-assigned this Mar 22, 2026
@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: 85f55e72ff

@SidU force-pushed the claude/migrate-teams-sdk-PKHin branch from 85f55e7 to e1f109f — March 22, 2026 02:12
@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: e1f109f9e0

@SidU force-pushed the claude/migrate-teams-sdk-PKHin branch from e1f109f to a2177b4 — March 22, 2026 03:39
@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: a2177b47b5

@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: dca792462e

@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: 41b2d7d3f2

@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: 31b9d65138

Comment on lines +216 to +221
await core.channel.reply.dispatchReplyFromConfig({
  ctx: ctxPayload,
  cfg,
  dispatcher,
  replyOptions,
});

P1: Close reflection dispatchers after running feedback reflection

runFeedbackReflection calls dispatchReplyFromConfig directly with a newly created dispatcher but never drives the dispatcher lifecycle (markComplete + waitForIdle) via withReplyDispatcher. In this codebase, each dispatcher starts with a reserved pending slot in the global registry, so every negative-feedback reflection can leave a permanent pending reply count and registered dispatcher behind; that can keep restart deferral logic thinking work is still active and delay operational restarts until timeout. Wrap this dispatch path in the settled-dispatch helper (or equivalent lifecycle calls) so reflection dispatchers always unregister.


@openclaw-barnacle bot added the scripts label — Mar 22, 2026
@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: 1eb16233f4

// Create Express server
const expressApp = express.default();
expressApp.use(authorizeJWT(authConfig));
expressApp.use(express.json({ limit: MSTEAMS_WEBHOOK_MAX_BODY_BYTES }));

P2: Authenticate webhook before parsing request bodies

The middleware order now runs express.json(...) before JWT verification, so unauthenticated callers can force JSON parsing (up to DEFAULT_WEBHOOK_MAX_BODY_BYTES) before being rejected. On an internet-exposed Teams webhook this increases per-request CPU/memory cost and makes unauthenticated request floods more effective than the previous ordering (which short-circuited with auth first). Move JWT/auth checks ahead of body parsing (or at least add a cheap pre-parse auth gate) to restore early rejection.


@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: 94a47a3fb5

const fs = await import("node:fs/promises");
const pathMod = await import("node:path");
const safeKey = route.sessionKey.replace(/[^a-zA-Z0-9_:-]/g, "_");
const transcriptFile = pathMod.join(storePath, `${safeKey}.jsonl`);

P2: Resolve feedback transcript path via session metadata

This writes feedback events to storePath/${safeKey}.jsonl, but OpenClaw transcripts are keyed by sessionId (for example sess-*.jsonl) and resolved through session metadata, not by sessionKey filename. In normal runs this creates an orphan sidecar file, so thumbs-up/down events are not actually appended to the active transcript used for later context/mining.


// This ensures the agent knows the sender's timezone for time-aware responses
// and proactive sends within the same session.
// Apply Teams clientInfo timezone if no explicit userTimezone is configured.
const senderTimezone = clientInfo?.timezone || conversationRef.timezone;

P2: Read persisted timezone before building effective cfg

conversationRef.timezone here only reflects the current activity (it is populated above only when clientInfo.timezone is present), so when subsequent messages omit clientInfo, senderTimezone becomes undefined and the dispatch drops back to no userTimezone. That causes per-user timezone behavior to disappear intermittently even though timezone was previously persisted in the conversation store.


@SidU (Contributor, Author) commented Mar 22, 2026

Manual Test Evidence (2026-03-22)

Environment: Azure VM (riley-inbestments.westus2.cloudapp.azure.com), branch claude/migrate-teams-sdk-PKHin
Method: Teams Web (teams.cloud.microsoft) + Playwright browser automation
Results: 29/30 PASS, 0 FAIL, 1 not tested (group chat)

1:1 Personal Chat

Basic reply + AI label + streaming:

  • Sent text message → bot replied within ~4s with "AI generated" badge
  • Streaming: progressive text updates with typing dots and Stop button mid-generation
  • Long response (3 paragraphs) completed with bold formatting, no truncation

Basic reply with AI label

Feedback loop (thumbs up/down):

  • Thumbs up → "What did you like?" dialog → "Feedback submitted." toast
  • Thumbs down → "What went wrong?" dialog → server received both feedback events
  • Server logs confirmed "received feedback" entries for both

Welcome card + View prompts:

  • Adaptive Card on install with 3 prompt starters: "What can you do?", "Summarize my last meeting", "Help me draft an email"
  • "View prompts" button shows "Help — Get help and available commands"

Image handling (bug found and fixed):

  • Pre-fix: pasted image → bot said "I don't see any image." Root cause: smba.trafficmanager.net missing from DEFAULT_MEDIA_AUTH_HOST_ALLOWLIST
  • Post-fix (commit 94a47a3fb5): pasted red square with "FIX" text → bot replied "🎉 I can see it! It's a red square with white text that says 'FIX'"

Rapid messages:

  • Sent 2 messages 500ms apart → bot replied to both separately, no duplicates

Channel (Self > General)

  • @OpenClaw (INT) hi → bot replied "Hey Sid 👋" in thread
  • @OpenClaw (INT) Tell me a joke about lobsters → lobster joke in thread with AI label + feedback buttons
  • No streaming in channels (single message delivery) ✓
  • Replies correctly threaded (not top-level) ✓
  • Message without @mention → bot did NOT reply ✓
  • @mention autocomplete resolved bot with description ✓

Access Control & Security

  • DM from non-allowlisted user → dropped ("dropping dm (not allowlisted)")
  • Pairing request created → approved via CLI → messages processed
  • curl POST /api/messages without token → 401 {"error":"Unauthorized"}

Bug Fixed in This PR

Copy-pasted images not downloaded (commit 94a47a3fb5):
smba.trafficmanager.net (the Bot Framework attachment service) was missing from DEFAULT_MEDIA_AUTH_HOST_ALLOWLIST. Pasted images arrive as contentType: "image/*" with a contentUrl at https://smba.trafficmanager.net/.../v3/attachments/.... The download returned 401 because the bot's Bearer token was never attached — the host wasn't in the auth allowlist. Fixed by adding "smba.trafficmanager.net" to the allowlist.

Full test report

30 tests across 5 categories (1:1 chat, channels, access control, infrastructure, media). Full report with steps/expected/actual for each test and 14 screenshots available in the test artifacts.

🤖 Generated with Claude Code

@BradGroux (Contributor) commented

@SidU / @fabianwilliams

Tested the Teams AI UX features end-to-end. Here's what I found:

  • ✅ AI-generated label: showing correctly on bot messages in both DMs and channel chat
  • ✅ Feedback loop (👍/👎): working in both DMs and channel chat
  • ✅ Welcome card: fires correctly when opening a 1:1 chat with the bot
  • ✅ Typing indicator: working in DMs
  • ✅ Streaming responses: working in DMs
  • ❌ Typing indicator in channel chat: not showing.

Per the PR description this is expected (a Teams platform limitation for channels), but it's worth documenting explicitly. I confirmed the same behavior with my already-configured Teams bot: the indicator works in DMs but not in channels. I hadn't noticed before, since most of my conversations are DMs and we use channels only for updates from the bot.

Overall everything is working as described. Nice work! 🚀

@BradGroux (Contributor) commented

@SidU I went through this closely. This looks mergeable as-is. The migration from @microsoft/agents-hosting to @microsoft/teams.apps / @microsoft/teams.api is well structured, and I don't see anything here that should block merge.

That said, I do think a few follow-up improvements are worth making:

  1. Log invoke-path failures in process()

    • Right now the adapter sends 200 early for invoke activities, which is the right tradeoff for Teams timeout behavior.
    • The downside is that if handler logic throws after the response is sent, that failure can become effectively invisible from the caller side.
    • I’d add explicit logging on the invoke error path so operational failures don’t turn into ghost bugs.
  2. Replace duplicated as unknown as token-method casts with a guarded helper

    • The repeated casts around getBotToken() / getAppGraphToken() in sdk.ts are the main thing I’d tighten up.
    • The current code works, but it depends on method availability in a way that could become a runtime failure if the SDK surface shifts.
    • A single helper with a runtime assertion would make this safer and much easier to reason about.
  3. Tighten the custom ActivityHandler compatibility behavior

    • In buildActivityHandler(), I’d gate conversationUpdate handling on membersAdded?.length instead of treating all conversationUpdate activities the same.
    • I’d also either document the current next() behavior clearly or make the chaining semantics closer to Bot Framework expectations.
    • Not a blocker for this PR, but it’s worth locking down before more behavior gets layered on top.
  4. Add direct tests around the shim layer

    • The migration updates surrounding behavior correctly, but the new adapter/handler/token surface is important enough that it deserves its own focused tests.
    • The areas I’d prioritize are handler dispatch semantics and updateActivity / token error handling.

Net: mergeable, and the implementation direction looks right. The items above feel like cleanup/hardening work rather than reasons to hold the PR.

@SidU force-pushed the claude/migrate-teams-sdk-PKHin branch from 94a47a3 to 47cc4a1 — March 22, 2026 22:39
@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: 47cc4a1342

const MAX_RESPONSE_CHARS = 500;

/** Tracks last reflection time per session to enforce cooldown. */
const lastReflectionBySession = new Map<string, number>();

P2: Evict stale entries from feedback reflection cooldown cache

lastReflectionBySession is a process-global Map that only ever grows: recordReflectionTime inserts/updates keys, but there is no production eviction path (only the test helper clears it). In long-lived gateways that receive negative feedback from many different sessions, this leaks one entry per unique session key and can cause unbounded memory growth over time; add TTL-based cleanup or prune expired keys during reads/writes.


@SidU force-pushed the claude/migrate-teams-sdk-PKHin branch from 47cc4a1 to d2110cc — March 24, 2026 03:37
@openclaw-barnacle bot added the docs label — Mar 24, 2026
@chatgpt-codex-connector bot left a comment

💡 Codex Review — reviewed commit: 4e1431567c

const fs = await import("node:fs/promises");
const path = await import("node:path");

const safeKey = params.sessionKey.replace(/[^a-zA-Z0-9_:-]/g, "_");

P2: Strip ':' from session-derived filenames

This sanitizer keeps : (/[^a-zA-Z0-9_:-]/g), but Teams session keys are colon-delimited (for example msteams:user123). On Windows, : is not a valid filename character, so writing/reading ${safeKey}.learnings.json fails for normal sessions and reflection learnings are silently dropped because the caller catches the write error. Replacing : as part of the normalization keeps feedback reflection persistence working cross-platform.


@SidU SidU force-pushed the claude/migrate-teams-sdk-PKHin branch from 5831b42 to c281897 Compare March 24, 2026 04:57
@SidU SidU merged commit cd90130 into openclaw:main Mar 24, 2026
21 of 22 checks passed

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c281897b3f


Contributor

@BradGroux BradGroux left a comment


Review: msteams Teams SDK migration + AI UX best practices

Verdict: Request changes (one blocking issue, otherwise strong work)

What works well

  • SDK migration is the right call. Moving from @microsoft/agents-hosting to the official Teams SDK is the correct long-term direction. The custom lightweight adapter wrapping the Teams SDK REST client is clean and preserves the core bot flow.
  • Streaming implementation is solid. Uses the streaminfo entity protocol correctly, throttles at 1.5s (matching Teams guidance), handles finalization separately from chunk sends, and falls back gracefully for >4000 char replies.
  • Fresh-token-per-call pattern is a good tradeoff for correctness and revocation hygiene over long-lived cached auth state.
  • AI UX features are thoughtful. The AI-generated label, informative status updates with progress bar, welcome card with prompt starters, and feedback loop all follow Microsoft's current guidance well.
  • 277 tests passing with real behavioral coverage, not padding. Test investment here is meaningful.
  • smba.trafficmanager.net auth fix is justified: copy-pasted images arrive via Bot Framework attachment URLs at this host and genuinely need the bot's Bearer token.
  • User-Agent on all outbound HTTP is good operational hygiene.

Blocking issue: reflection follow-up may leak internal reasoning

In feedback-reflection.ts, the proactive follow-up logic sends reflectionResponse.trim() directly to the user based on string heuristics (contains "follow up" or length < 300). The reflection prompt asks the model to produce both internal adjustment notes and an optional user-facing follow-up, but the code sends the entire reflection response, which can include:

  • Self-critique ("I should be more accurate / less verbose")
  • Meta-commentary intended for internal behavior shaping
  • Conversation-derived internal notes never meant for user display

This blurs the line between internal reflection and user-facing output, creating privacy and product risk, and it makes the system's behavior prompt-sensitive rather than structurally safe.

Recommended fix:

  • Parse structured output with explicit fields: learning, shouldFollowUp, followUpMessage
  • Store only learning
  • Send only followUpMessage when explicitly present and allowed
  • On parse failure, store the learning and skip the proactive send entirely
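The recommended fix can be sketched as a structured parser. Field names (learning, shouldFollowUp, followUpMessage) follow the bullets above and are illustrative, not the extension's actual schema:

```typescript
interface ReflectionResult {
  learning: string;
  shouldFollowUp: boolean;
  followUpMessage?: string;
}

// Hypothetical sketch: parse the model's reflection as structured JSON so
// internal notes can never reach the user. Any parse failure degrades to
// "store only, never send".
function parseReflection(raw: string): ReflectionResult {
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed.learning === "string") {
      const hasFollowUp =
        parsed.shouldFollowUp === true &&
        typeof parsed.followUpMessage === "string";
      return {
        learning: parsed.learning,
        shouldFollowUp: hasFollowUp,
        followUpMessage: hasFollowUp ? parsed.followUpMessage : undefined,
      };
    }
  } catch {
    // Fall through to the safe default below.
  }
  // Parse failure: keep the raw text as an internal learning, skip the send.
  return { learning: raw.trim(), shouldFollowUp: false };
}
```

The caller then persists only `learning` and sends only `followUpMessage` when `shouldFollowUp` is true, which removes the string-heuristic path entirely.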

Non-blocking discussion items

  1. Shared sequenceNumber across informative updates and streaming chunks. Worth confirming this exactly matches Teams expectations. If Teams treats informative and streaming as separate ordered streams, this is the first place to look when odd production behavior surfaces.

  2. Stream failure after informative start. The "best effort + fallback" posture is right for Teams, but I would want integration coverage for two cases: a mid-stream API failure must not produce a duplicate or conflicting final delivery, and a stream that exceeds 4000 chars after an informative update must close cleanly with the full reply still landing.

  3. Cooldown pruning uses the default cooldown instead of the configured cooldown value. Minor inconsistency, but it means memory cleanup semantics do not exactly match runtime policy.

  4. Merge conflicts. PR is currently CONFLICTING. Given the scope (transport, UX, config, tests across 39 files), expect non-trivial conflict resolution with regression risk.

  5. Migration risk for existing installations. Core reply flow looks safe. Edge cases around proactive replies, serviceUrl/token scope behavior, and attachment retrieval/upload are where regressions are most likely to surface.
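For item 1, the shared-counter behavior can be sketched as follows. This assumes the streaminfo entity's documented streamId / streamType / streamSequence properties; the factory shape is illustrative:

```typescript
// Hedged sketch: one monotonically increasing counter spans both
// informative updates and streaming chunks for the same streamId, which is
// what "shared sequenceNumber" means in practice.
function makeStreamEntityFactory(streamId: string) {
  let streamSequence = 0;
  return (streamType: "informative" | "streaming") => ({
    type: "streaminfo" as const,
    streamId,
    streamType,
    streamSequence: ++streamSequence,
  });
}
```

If Teams turns out to expect independent sequences per streamType, the fix is to hold one counter per type inside the factory instead of one shared counter.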

Test coverage gaps

Would specifically want tests for:

  • Reflection includes internal note + "follow up: no" -> no user message sent
  • Reflection includes internal note + draft follow-up -> only the draft follow-up sent, not the full blob
  • Malformed reflection output -> learning stored, no proactive user message
  • Stream exceeds 4000 chars after informative update -> clean close + full reply lands
  • Teams API failure mid-stream chunk -> no duplicate or conflicting final delivery

Bottom line

Strong PR, right direction, substantial work. The reflection follow-up behavior is the one thing that needs fixing before merge: structured output parsing instead of string heuristics, strict separation of internal learning from user-facing messages. Everything else is solid or discussion-level.

hzq001 pushed a commit to hzq001/openclaw that referenced this pull request Mar 24, 2026
Migrates the Teams extension from @microsoft/agents-hosting to the official Teams SDK (@microsoft/teams.apps + @microsoft/teams.api) and implements Microsoft's AI UX best practices for Teams agents.

- AI-generated label on all bot messages (Teams native badge + thumbs up/down)
- Streaming responses in 1:1 chats via Teams streaminfo protocol
- Welcome card with configurable prompt starters on bot install
- Feedback with reflective learning (negative feedback triggers background reflection)
- Typing indicators for personal + group chats (disabled for channels)
- Informative status updates (progress bar while LLM processes)
- JWT validation via Teams SDK createServiceTokenValidator
- User-Agent: teams.ts[apps]/<sdk-version> OpenClaw/<version> on outbound requests
- Fix copy-pasted image downloads (smba.trafficmanager.net auth allowlist)
- Pre-parse auth gate (reject unauthenticated requests before body parsing)
- Reflection dispatcher lifecycle fix (prevent leaked dispatchers)
- Colon-safe session filenames (Windows compatibility)
- Cooldown cache eviction (prevent unbounded memory growth)

Closes openclaw#51806
@steipete
Contributor

Follow-up review after merge: I patched the remaining issues locally and validated them.

Changes in the follow-up patch:

  • Reflection output is now parsed as structured JSON, so we persist only the internal learning field and only send an explicit userMessage when the model asks for a follow-up.
  • Reflection follow-ups are now DM-only, so channel/group thumbs-down events can no longer echo internal self-critique back into the room.
  • Cooldown pruning now respects feedbackReflectionCooldownMs instead of always pruning with the default 5 minute window.
  • The Teams informative status update is now actually wired on reply start for personal chats; the helper existed, but production code was not calling it.
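The DM-only rule reduces to a small guard at the send site. A sketch with illustrative names (Teams conversation types are "personal", "groupChat", and "channel"):

```typescript
type ConversationType = "personal" | "groupChat" | "channel";

// Hypothetical guard: reflection follow-ups are sent only in personal (1:1)
// conversations, so thumbs-down in a group or channel never triggers a
// public reply containing reflection output.
function canSendReflectionFollowUp(conversationType: ConversationType): boolean {
  return conversationType === "personal";
}
```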

Validation:

  • pnpm test -- extensions/msteams/src/feedback-reflection.test.ts extensions/msteams/src/reply-dispatcher.test.ts extensions/msteams/src/streaming-message.test.ts
  • pnpm build

tiagonix pushed a commit to tiagonix/openclaw that referenced this pull request Mar 24, 2026
siofra-seksbot added a commit to TheBotsters/botster-ego that referenced this pull request Mar 25, 2026
* Formatting fixes and remove trailing dash acceptance

* Remove lower casing -- preserving prior behavior

* fix: preserve legacy clawhub skill updates (openclaw#53206) (thanks @drobison00)

* feat(csp): support inline script hashes in Control UI CSP (openclaw#53307) thanks @BunsDev

Co-authored-by: BunsDev <[email protected]>
Co-authored-by: Nova <[email protected]>

* refactor: separate exec policy and execution targets

* test: print failed test lane output tails

* fix(cron): make --tz work with --at for one-shot jobs

Previously, `--at` with an offset-less ISO datetime (e.g. `2026-03-23T23:00:00`)
was always interpreted as UTC, even when `--tz` was provided. This caused one-shot
jobs to fire at the wrong time.

Changes:
- `parseAt()` now accepts an optional `tz` parameter
- When `--tz` is provided with `--at`, offset-less datetimes are interpreted in
  that IANA timezone using Intl.DateTimeFormat
- Datetimes with explicit offsets (e.g. `+01:00`, `Z`) are unaffected
- Removed the guard in cron-edit that blocked `--tz` with `--at`
- Updated `--at` help text to mention `--tz` support
- Added 2 tests verifying timezone resolution and offset preservation

* fix: land cron tz one-shot handling and prerelease config warnings (openclaw#53224) (thanks @RolfHegr)

* fix: clean changelog merge duplication (openclaw#53224) (thanks @RolfHegr)

* test: isolate line jiti runtime smoke

* refactor: harden extension runtime-api seams

* tests: improve boundary audit coverage and safety (openclaw#53080)

* tools: extend seam audit inventory

* tools: tighten seam audit heuristics

* tools: refine seam test matching

* tools: refine seam audit review heuristics

* style: format seam audit script

* tools: widen seam audit matcher coverage

* tools: harden seam audit coverage

* tools: tighten boundary audit matchers

* tools: ignore mocked import matches in boundary audit

* test: include native command reply seams in audit

* fix: command auth SecretRef resolution (openclaw#52791) (thanks @Lukavyi)

* fix(command-auth): handle unresolved SecretRef in resolveAllowFrom

* fix(command-auth): fall back to config allowlists

* fix(command-auth): avoid duplicate resolution fallback

* fix(command-auth): fail closed on invalid allowlists

* fix(command-auth): isolate fallback resolution errors

* fix: record command auth SecretRef landing notes (openclaw#52791) (thanks @Lukavyi)

---------

Co-authored-by: Ayaan Zaidi <[email protected]>

* refactor: extract cron schedule and test runner helpers

* fix: populate currentThreadTs in threading tool context fallback for Telegram DM topics (openclaw#52217)

When a channel plugin lacks a custom buildToolContext (e.g. Telegram),
the fallback path in buildThreadingToolContext did not set currentThreadTs
from the inbound MessageThreadId. This caused resolveTelegramAutoThreadId
to return undefined, so message tool sends without explicit threadId
would route to the main chat instead of the originating DM topic.

Fixes openclaw#52217

* fix: unblock runtime-api smoke checks

* refactor: split tracked ClawHub update flows

* build: prepare 2026.3.23-2

* fix: preserve command auth resolution errors on empty inferred allowlists

* docs: refresh plugin-sdk api baseline

* test: harden linux runtime smoke guards

* fix(runtime): anchor bundled plugin npm staging to active node

* tests: cron coverage and NO_REPLY delivery fixes (openclaw#53366)

* tools: extend seam audit inventory

* tools: audit cron seam coverage gaps

* test: add cron seam coverage tests

* fix: avoid marking NO_REPLY cron deliveries as delivered

* fix: clean up delete-after-run NO_REPLY cron sessions

* fix: verify global npm correction installs

* build: prepare 2026.3.24

* docs: update mac release automation guidance

* fix: fail closed when provider inference drops errored allowlists

* fix: reject nonexistent zoned cron at-times

* fix: hash inline scripts with data-src attributes

* ci: balance shards and reuse pr artifacts

* refactor: simplify provider inference and zoned parsing helpers

* fix: unify live model auth gating

* tests: add boundary coverage for media delivery (openclaw#53361)

* tests: add boundary coverage for media delivery

* tests: isolate telegram outbound adapter transport

* tests: harden telegram webhook certificate assertion

* tests: fix guardrail false positives on rebased branch

* msteams: extract structured quote/reply context (openclaw#51647)

* msteams: extract structured quote/reply context from Teams HTML attachments

* msteams: address PR openclaw#51647 review feedback

* msteams: add message edit and delete support (openclaw#49925)

- Add edit/delete action handlers with toolContext.currentChannelId
  fallback for in-thread edits/deletes without explicit target
- Add editMessageMSTeams/deleteMessageMSTeams to channel runtime
- Add updateActivity/deleteActivity to SendContext and MSTeamsTurnContext
- Extend content param with text/content/message fallback chain
- Update test mocks for new SendContext shape

Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>

* fix(doctor): honor --fix in non-interactive mode

Ensure repair-mode doctor prompts auto-accept recommended fixes even when running non-interactively, while still requiring --force for aggressive rewrites.

This restores the expected behavior for upgrade/doctor flows that rely on 'openclaw doctor --fix --non-interactive' to repair stale gateway service configuration such as entrypoint drift after global updates.

Co-authored-by: Copilot <[email protected]>

* Preserve no-restart during update doctor fixes

Co-authored-by: Copilot <[email protected]>

* fix(doctor): skip service config repairs during updates

Co-authored-by: Copilot <[email protected]>

* fix: add config clobber forensics

* fix(ui): resolve model provider from catalog instead of stale session default

When the server returns a bare model name (e.g. "deepseek-chat") with
a session-level modelProvider (e.g. "zai"), the UI blindly prepends
the provider — producing "zai/deepseek-chat" instead of the correct
"deepseek/deepseek-chat". This causes "model not allowed" errors
when switching between models from different providers.

Root cause: resolveModelOverrideValue() and resolveDefaultModelValue()
in app-render.helpers.ts, plus the /model slash command handler in
slash-command-executor.ts, all call resolveServerChatModelValue()
which trusts the session's default provider. The session provider
reflects the PREVIOUS model, not the newly selected one.

Fix: for bare model names, create a raw ChatModelOverride and resolve
through normalizeChatModelOverrideValue() which looks up the correct
provider from the model catalog. Falls back to server-provided provider
only if the catalog lookup fails. All 3 call sites are fixed.

Closes openclaw#53031

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Signed-off-by: HCL <[email protected]>

* style(ui): polish agent file preview and usage popovers (openclaw#53382)

* feat: make workspace links clickable in agent context card and files list

Updated the agent context card and files list to render workspace names as clickable links, allowing users to easily access the corresponding workspace files. This enhances usability by providing direct navigation to the workspace location.

* style(ui): polish markdown preview dialog

* style(ui): reduce markdown preview list indentation

* style(ui): update markdown preview dialog width and alignment

* fix(ui): open usage filter popovers toward the right

* style(ui): adjust positioning of usage filter and export popovers

* style(ui): update sidebar footer padding and modify usage header z-index

* style(ui): adjust positioning of usage filter popover to the left and export popover to the right

* style(ui): simplify workspace link rendering in agent context card

* UI: make workspace paths interactive buttons or plain text

Agent Context card workspace (Channels/Cron panels): replace non-interactive
<div> with a real <button> wired to onSelectPanel('files'), matching the
Overview panel pattern.

Core Files footer workspace: drop workspace-link class since the user is
already on the Files panel — keep as plain text.

* fix(agents): suppress heartbeat prompt for cron-triggered embedded runs

Prevent cron-triggered embedded runs from inheriting the default heartbeat prompt so non-cron session targets stop reading HEARTBEAT.md and polluting scheduled turns.

Made-with: Cursor

* test(agents): cover additional heartbeat prompt triggers

Document that default-agent heartbeat prompt injection still applies to memory-triggered and triggerless runs while cron remains excluded.

Made-with: Cursor

* fix: land cron heartbeat prompt suppression (openclaw#53152) (thanks @Protocol-zero-0)

* msteams: implement Teams AI agent UX best practices (openclaw#51808)


* refactor: tighten embedded prompt and sidecar guards

* test: audit subagent seam coverage inventory

* test: add exact-stem subagent seam tests

* refactor: clarify doctor repair flow

* fix(plugins): make Matrix recovery paths tolerate stale plugin config (openclaw#52899)

* fix(plugins): address review feedback for Matrix recovery paths (openclaw#52899)

1. Narrow loadConfigForInstall() to catch only INVALID_CONFIG errors,
   letting real failures (fs permission, OOM) propagate.
2. Assert allow array is properly cleaned in stale-cleanup test.
3. Add comment clarifying version-resolution is already addressed via
   the shared VERSION constant.
4. Run cleanStaleMatrixPluginConfig() during install so
   persistPluginInstall() → writeConfigFile() does not fail validation
   on stale Matrix load paths.

* fix(plugins): address review feedback for Matrix recovery paths (openclaw#52899)

* fix: fetch model catalog for slash command updates

* fix: restore teams sdk adapter contracts

* fix: keep slash command model qualification on rebase

* fix: clear production dependency advisories

* fix: delete subagent runs after announce give-up

* refactor: polish trigger and manifest seams

* refactor(ui): extract chat model resolution state

* fix(feishu): preserve docx block tree order (openclaw#40524)

Verified:
- pnpm install --frozen-lockfile
- pnpm build
- pnpm vitest run extensions/feishu/src/docx.test.ts

Co-authored-by: Tao Xie <[email protected]>

* fix: stabilize matrix and teams ci assertions

* fix: preserve subagent ended hooks until runtime init

* test: prune low-signal live model sweeps

* test: harden parallels smoke harness

* fix: preserve direct subagent dispatch failures on abort

* fix: report dropped subagent announce queue deliveries

* fix: unblock live harness provider discovery

* fix: finalize resumed subagent cleanup give-ups

* refactor: centralize plugin install config policy

* fix: format subagent registry test

* fix: finalize deferred subagent expiry cleanup

* fix(tui): preserve user message during slow model responses (openclaw#53115)

When a local run ends with an empty final event while another run is active,
skip history reload to prevent clearing the user's pending message from the
chat log. This fixes the 'message disappears' issue with slow models like Ollama.

* fix: preserve deferred TUI history sync (openclaw#53130) (thanks @joelnishanth)

* test: sync app chat model override expectation

* feat(ui): Control UI polish — skills revamp, markdown preview, agent workspace, macOS config tree (openclaw#53411) thanks @BunsDev

Co-authored-by: BunsDev <[email protected]>
Co-authored-by: Nova <[email protected]>

* fix(security): resolve Aisle findings — skill installer validation, terminal sanitization, URL scheme allowlisting (openclaw#53471) thanks @BunsDev

Co-authored-by: BunsDev <[email protected]>
Co-authored-by: Nova <[email protected]>

* fix: widen installer regex allowlists and deduplicate safeExternalHref calls

- SAFE_GO_MODULE: allow uppercase in module paths (A-Z)
- SAFE_BREW_FORMULA: allow @ for versioned formulas ([email protected])
- SAFE_UV_PACKAGE: allow extras [standard] and equality pins ==
- Cache safeExternalHref result in skills detail API key section

* docs: update CONTRIBUTING.md

* test: continue vitest threads migration

* test: continue vitest threads migration

* test: harden threaded shared-worker suites

* test: harden threaded channel follow-ups

* test: defer slack bolt interop for helper-only suites

* fix(agents): harden edit tool recovery (openclaw#52516)

Merged via squash.

Prepared head SHA: e23bde8
Co-authored-by: mbelinky <[email protected]>
Co-authored-by: mbelinky <[email protected]>
Reviewed-by: @mbelinky

* fix(docs): correct json55 typo to json5 in IRC channel docs (openclaw#50831) (openclaw#50842)

Merged via squash.

Prepared head SHA: 0f743bf
Co-authored-by: Hollychou924 <[email protected]>
Co-authored-by: altaywtf <[email protected]>
Reviewed-by: @altaywtf

* fix(secrets): prevent unresolved SecretRef from crashing embedded agent runs

Root cause: Telegram channel monitor captures config at startup before secrets
are resolved and passes it as configOverride into the reply pipeline. Since
getReplyFromConfig() uses configOverride directly (skipping loadConfig() which
reads the resolved runtime snapshot), the unresolved SecretRef objects propagate
into FollowupRun.run.config and crash runEmbeddedPiAgent().

Fix (defense in depth):
- get-reply.ts: detect unresolved SecretRefs in configOverride and fall back to
  loadConfig() which returns the resolved runtime snapshot
- message-tool.ts: try-catch around schema/description building at tool creation
  time so channel discovery errors don't crash the agent
- message-tool.ts: detect unresolved SecretRefs in pre-bound config at tool
  execution time and fall back to gateway secret resolution

Fixes: openclaw#45838

* fix: merge explicit reply config overrides onto fresh config

* fix: clean up failed non-thread subagent spawns

* fix: initialize plugins before killed subagent hooks

* fix: report qmd status counts from real qmd manager (openclaw#53683) (thanks @neeravmakwana)

* fix(memory): report qmd status counts from index

* fix(memory): reuse full qmd manager for status

* fix(memory): harden qmd status manager lifecycle

* fix: ci

* fix: finalize killed delete-mode subagent cleanup

* fix: clean up attachments for killed subagent runs

* feat(cli): support targeting running containerized openclaw instances (openclaw#52651)

Signed-off-by: sallyom <[email protected]>

* fix: ci

* Telegram: recover General topic bindings (openclaw#53699)

Merged via squash.

Prepared head SHA: 546f0c8
Co-authored-by: huntharo <[email protected]>
Co-authored-by: huntharo <[email protected]>
Reviewed-by: @huntharo

* fix: clean up attachments for released subagent runs

* fix(ci): do not cancel in-progress main runs

* fix: clean up attachments for orphaned subagent runs

* test: speed up discord extension suites

* test: speed up slack extension suites

* test: speed up telegram extension suites

* test: speed up whatsapp and shared test suites

* fix(ci): do not cancel in-progress bun runs on main

* fix: clean up attachments when replacing subagent runs

* feat(discord): add autoThreadName 'generated' strategy (openclaw#43366)

* feat(discord): add autoThreadName 'generated' strategy

Adds async thread title generation for auto-created threads:
- autoThread: boolean - enables/disables auto-threading
- autoThreadName: 'message' | 'generated' - naming strategy
- 'generated' uses LLM to create concise 3-6 word titles
- Includes channel name/description context for better titles
- 10s timeout with graceful fallback

* Discord: support non-key auth for generated thread titles

* Discord: skip fallback auto-thread rename

* Discord: normalize generated thread title first content line

* Discord: split thread title generation helpers

* Discord: tidy thread title generation constants and order

* Discord: use runtime fallback model resolution for thread titles

* Discord: resolve thread-title model aliases

* Discord: fallback thread-title model selection to runtime defaults

* Agents: centralize simple completion runtime

* fix(discord): pass apiKey to complete() for thread title generation

The setRuntimeApiKey approach only works for full agent runs that use
authStorage.getApiKey(). The pi-ai complete() function expects apiKey
directly in options or falls back to env vars — it doesn't read from
authStorage.runtimeOverrides.

Fixes thread title generation for Claude/Anthropic users.

* fix(agents): return exchanged Copilot token from prepareSimpleCompletionModel

The recent thread-title fix (3346ba6) passes prepared.auth.apiKey to
complete(). For github-copilot, this was still the raw GitHub token
rather than the exchanged runtime token, causing auth failures.

Now setRuntimeApiKeyForCompletion returns the resolved token and
prepareSimpleCompletionModel includes it in auth.apiKey, so both the
authStorage path and direct apiKey pass-through work correctly.

* fix(agents): catch auth lookup exceptions in completion model prep

getApiKeyForModel can throw for credential issues (missing profile, etc).
Wrap in try/catch to return { error } for fail-soft handling rather than
propagating rejected promises to callers like thread title generation.

* Discord: strip markdown wrappers from generated thread titles

* Discord/agents: align thread-title model and local no-auth completion headers

* Tests: import fresh modules for mocked thread-title/simple-completion suites

* Agents: apply exchanged Copilot baseUrl in simple completions

* Discord: route thread runtime imports through plugin SDK

* Lockfile: add Discord pi-ai runtime dependency

* Lockfile: regenerate Discord pi-ai runtime dependency entries

* Agents: use published Copilot token runtime module

* Discord: refresh config baseline and lockfile

* Tests: split extension runs by isolation

* Discord: add changelog for generated thread titles (openclaw#43366) (thanks @davidguttman)

---------

Co-authored-by: Onur Solmaz <[email protected]>
Co-authored-by: Onur Solmaz <[email protected]>

* add missing autoArchiveDuration to DiscordGuildChannelConfig type (openclaw#43427)

* add missing autoArchiveDuration to DiscordGuildChannelConfig type

The autoArchiveDuration field is present in the Zod schema
(DiscordGuildChannelSchema) and actively used at runtime in
threading.ts and allow-list.ts, but was missing from the
canonical TypeScript type definition.

Add autoArchiveDuration to DiscordGuildChannelConfig to align
the type with the schema and runtime usage.

* Discord: add changelog for config type fix (openclaw#43427) (thanks @davidguttman)

---------

Co-authored-by: Onur Solmaz <[email protected]>

* refactor: dedupe test and script helpers

* test: speed up discord extension suites

* test: speed up slack extension suites

* test: speed up telegram extension suites

* test: speed up signal and whatsapp extension suites

* fix(discord): avoid bundling pi-ai runtime deps

* fix(lockfile): sync discord dependency removal

* test: speed up discord slack telegram suites

* test: speed up whatsapp and signal suites

* test: speed up google and twitch suites

* test: speed up core unit suites

* fix: preserve cleanup hooks after subagent register failure

* fix: preserve session cleanup hooks after subagent announce

* Feishu: avoid CLI startup failure on unresolved SecretRef

* fix(doctor): add missing baseUrl and models when migrating nano-banana apiKey to google provider

The legacy nano-banana-pro skill migration moves the Gemini API key to
models.providers.google.apiKey but does not populate the required baseUrl
and models fields on the provider entry. When the google provider object
is freshly created (no pre-existing config), the resulting config fails
Zod validation on write:

  Config validation failed: models.providers.google.baseUrl:
  Invalid input: expected string, received undefined

Fix: default baseUrl to 'https://generativelanguage.googleapis.com' and
models to [] when they are not already set, matching the defaults used
elsewhere in the codebase (embeddings-gemini, pdf-native-providers).

Fixes the 'doctor --fix' crash for users who only have a legacy
nano-banana-pro skill entry and no existing models.providers.google.

* fix: use v1beta for migrated google nano banana provider (openclaw#53757) (thanks @mahopan)

* docs: add changelog for PR openclaw#53675 (thanks @hpt)

* fix(msteams): harden feedback reflection follow-ups

* test: stabilize preaction process title assertion (openclaw#53808)

Regeneration-Prompt: |
  Current origin/main fails src/cli/program/preaction.test.ts because the
  test asserts on process.title directly inside Vitest, where that runtime
  interaction is not stable enough to observe the write reliably. Keep the
  production preaction behavior unchanged. Make the test verify that the
  hook assigns the expected title by wrapping process.title with a local
  getter/setter during each test and restoring the original descriptor
  afterward so other tests keep the real process object behavior.

* fix(auth): protect fresher codex reauth state

- invalidate cached Codex CLI credentials when auth.json changes within the TTL window
- skip external CLI sync when the stored Codex OAuth credential is newer
- cover both behaviors with focused regression tests

Refs openclaw#53466

Co-authored-by: Copilot <[email protected]>

* fix: return structured errors for subagent control send failures

* refactor: centralize google API base URL handling

* refactor(msteams): split reply and reflection helpers

* refactor(auth): unify external CLI credential sync

* refactor: split feishu runtime and inspect secret resolution

* test(memory): clear browser and plugin caches between cases

* fix(types): add workspace module shims

* fix: avoid duplicate orphaned subagent resumes

* test(memory): enable lower-interval heap snapshots

* fix: audit clobbered config reads

* fix(whatsapp): filter fromMe messages in groups to prevent infinite loop (openclaw#53386)

* fix: suppress only recent whatsapp group echoes (openclaw#53624) (thanks @w-sss)

* test: speed up slack and telegram suites

* test: speed up cli and model command suites

* test: speed up command runtime suites

* test: speed up backup and doctor suites

* fix(memory): avoid caching status-only managers

* fix: stabilize logging config imports

* fix(slack): improve interactive reply parity (openclaw#53389)

* fix(slack): improve interactive reply parity

* fix(slack): isolate reply interactions from plugins

* docs(changelog): note slack interactive parity fixes

* fix(slack): preserve preview text for local agent replies

* fix(agent): preserve directive text in local previews

* test: preserve child_process exports in restart bun mock

* fix(memory): avoid caching qmd status managers

* test: speed up browser and gateway suites

* test: speed up media fetch suite

* fix(acp): deliver final result text as fallback when no blocks routed

- Check routedCounts.final to detect prior delivery
- Skip fallback for ttsMode='all' to avoid duplicate TTS processing
- Use delivery.deliver for proper routing in cross-provider turns
- Fixes openclaw#46814 where ACP child run results were not delivered
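The decision logic in those bullets can be sketched as a single predicate. `routedCounts.final` and `ttsMode` come from the commit message; the surrounding types are assumptions.

```typescript
// Illustrative sketch of the ACP final-fallback decision: deliver the final
// result text only when no blocks were routed and TTS won't duplicate it.
interface RoutedCounts {
  final: number; // how many final blocks were already delivered
}

function shouldDeliverFinalFallback(routed: RoutedCounts, ttsMode: string): boolean {
  if (routed.final > 0) return false; // final text already delivered
  if (ttsMode === "all") return false; // avoid duplicate TTS processing
  return true; // route through delivery.deliver as fallback
}
```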

* fix: tighten ACP final fallback semantics (openclaw#53692) (thanks @w-sss)

* fix: unify pi runner usage snapshot fallback

* refactor: isolate ACP final delivery flow

* fix(ci): stop dropping pending main workflow runs

* test(memory): isolate new unit hotspot files

* test(memory): isolate browser remote-tab hotspot

* test(memory): isolate plugin-core hotspot

* test(memory): isolate telegram bot hotspot

* fix: continue subagent kill after session store write failures

* test(memory): isolate telegram fetch hotspot

* test: speed up plugin-sdk and cron suites

* test: speed up browser suites

* test(memory): isolate telegram monitor hotspot

* test(memory): isolate slack action-runtime hotspot

* test(memory): recycle shared channels batches

* fix: fail closed when subagent steer remap fails

* Providers: fix kimi-coding thinking normalization

* Providers: fix kimi fallback normalization

* Plugins: resolve sdk aliases from the running CLI

* Plugins: trust only startup cli sdk roots

* Plugins: sanitize sdk export subpaths

* Webchat: handle bare /compact as session compaction

* Chat UI: tighten compact transport handling

* Chat UI: guard compact retries

* fix: ignore stale subagent steer targets

* fix(discord): notify user on discord when inbound worker times out (openclaw#53823)

* fix(discord): notify user on discord when inbound worker times out.

* fix(discord): notify user on discord when inbound worker times out.

* Discord: await timeout fallback reply

* Discord: add changelog for timeout reply fix (openclaw#53823) (thanks @Kimbo7870)

---------

Co-authored-by: VioGarden <[email protected]>
Co-authored-by: Onur Solmaz <[email protected]>

* refactor(channels): route registry lookups through runtime

* refactor(plugins): make runtime registry lazy

* refactor(plugins): make hook runner global lazy

* refactor(plugins): make command registry lazy

* fix: allow compact retry after failed session compaction (openclaw#53875)

* refactor(gateway): make plugin fallback state lazy

* refactor(plugins): make interactive state lazy

* fix(memory): align status manager concurrency test

* fix(runtime): stabilize dist runtime artifacts (openclaw#53855)

* fix(build): stabilize lazy runtime entry paths

* fix(runtime): harden bundled plugin npm staging

* docs(changelog): note runtime artifact fixes

* fix(runtime): stop trusting npm_execpath

* fix(runtime): harden Windows npm staging

* fix(runtime): add safe Windows npm fallback

* ci: start required checks earlier (openclaw#53844)

* ci: start required checks earlier

* ci: restore pnpm in security-fast

* ci: skip docs-only payloads in early check jobs

* ci: harden untrusted pull request execution

* ci: pin gradle setup action

* ci: normalize pull request concurrency cancellation

* ci: remove duplicate early-lane setup

* ci: keep install-smoke push runs unique

* fix: unblock supervisor and memory gate failures

* test: stabilize low-profile parallel gate

* refactor(core): make event and queue state lazy

* fix(ci): refresh plugin sdk baseline and formatting

* chore: refresh plugin sdk api baseline

* fix: ignore stale subagent kill targets

* perf(plugins): scope web search plugin loads

* fix: ignore stale subagent send targets

* fix: validate agent workspace paths before writing identity files (openclaw#53882)

* fix: validate agent workspace paths before writing identity files

* Feedback updates and formatting fixes

* refactor: dedupe tests and harden suite isolation

* test: fix manifest registry fixture typing

* fix: ignore stale bulk subagent kill targets

* fix(cli): precompute bare root help startup path

* fix(test): stabilize npm runner path assertion

* test(gateway): align safe open error code

* test: speed up targeted unit suites

* fix: prefer current subagent targets over stale rows

* fix(ci): use target-platform npm path semantics

* Adjust CLI backend environment handling before spawn (openclaw#53921)

security(agents): sanitize CLI backend env overrides before spawn

* fix: surface finished subagent send targets

* perf(memory): avoid eager provider init on empty search

* fix(test): satisfy cli backend config typing

* fix: let subagent kill cascade through ended parents

* perf(sqlite): use existence probes for empty memory search

* fix: allow follow-up sends to finished subagents

* fix: steer ended subagent orchestrators with live descendants

* test: speed up browser pw-tools-core suites

* test: speed up memory and secrets suites

* fix(ci): align lazy memory provider tests

* fix(test): stabilize memory vector dedupe assertion

* fix(test): isolate github copilot token imports

* fix: keep active-descendant subagents visible in reply status

* refactor: dedupe helpers and source seams

* test: fix rebase gate regressions

* Adjust Feishu webhook request body limits (openclaw#53933)

* fix: dedupe stale subagent rows in reply views

* ci: batch shared extensions test lane

* fix: report deduped subagent totals

* fix: dedupe verbose subagent status counts

* fix: align /agents ids with subagent targets

* refactor: dedupe test helpers and harnesses

* perf(memory): builtin sqlite hot-path follow-ups (openclaw#53939)

* chore(perf): start builtin sqlite hotpath workstream

* perf(memory): reuse sqlite statements during sync

* perf(memory): snapshot file state during sync

* perf(memory): consolidate status sqlite reads

* docs(changelog): note builtin sqlite perf work

* perf(memory): avoid session table scans on targeted sync

* test: speed up memory provider suites

* test: speed up slack monitor suites

* test: speed up discord channel suites

* test: speed up telegram and whatsapp suites

* ci: increase test shard fanout

* fix: clean up matrix /agents binding labels

* fix: dedupe active child session counts

* fix: dedupe restarted descendant session counts

* fix: block non-owner authorized senders from changing /send policy (openclaw#53994)


* fix(slack): trim DM reply overhead and restore Codex auto transport (openclaw#53957)

* perf(slack): instrument runtime and trim DM overhead

* perf(slack): lazy-init draft previews

* perf(slack): add turn summary diagnostics

* perf(core): trim repeated runtime setup noise

* perf(core): preselect default web search providers

* perf(agent): restore OpenAI auto transport defaults

* refactor(slack): drop temporary perf wiring

* fix(slack): address follow-up review notes

* fix(security): tighten slack and runtime defaults

* style(web-search): fix import ordering

* style(agent): remove useless spread fallback

* docs(changelog): note slack runtime hardening

* test: speed up discord monitor suites

* test: speed up cli and command suites

* test: speed up slack monitor suites

* fix: ignore stale rows in subagent activity checks

* fix: prefer latest subagent rows for session control

* fix: ignore stale rows in subagent admin kill

* fix: dedupe stale child completion announces

* fix: ignore stale rows in subagent steer

* fix: cascade bulk subagent kills past stale rows

* fix: address FootGun's PR #8 review — regenerate metadata + fix Zulip imports

1. Regenerated bundled-plugin-metadata.generated.ts (stale after upstream merge)
2. Fixed Zulip extension monolithic plugin-sdk imports:
   - OpenClawPluginApi → openclaw/plugin-sdk/plugin-entry
   - emptyPluginConfigSchema, PluginRuntime, OpenClawConfig → openclaw/plugin-sdk/core
   - ChannelAccountSnapshot inline imports → openclaw/plugin-sdk/zulip
3. Added ChannelAccountSnapshot re-export to src/plugin-sdk/zulip.ts

---------

Signed-off-by: HCL <[email protected]>
Signed-off-by: sallyom <[email protected]>
Co-authored-by: Devin Robison <[email protected]>
Co-authored-by: Peter Steinberger <[email protected]>
Co-authored-by: Val Alexander <[email protected]>
Co-authored-by: BunsDev <[email protected]>
Co-authored-by: Nova <[email protected]>
Co-authored-by: Rolfy <[email protected]>
Co-authored-by: Tak Hoffman <[email protected]>
Co-authored-by: Taras Lukavyi <[email protected]>
Co-authored-by: Ayaan Zaidi <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>
Co-authored-by: sudie-codes <[email protected]>
Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
Co-authored-by: giulio-leone <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: HCL <[email protected]>
Co-authored-by: Protocol-zero-0 <[email protected]>
Co-authored-by: Sid Uppal <[email protected]>
Co-authored-by: Catalin Lupuleti <[email protected]>
Co-authored-by: Tao Xie <[email protected]>
Co-authored-by: Tao Xie <[email protected]>
Co-authored-by: joelnishanth <[email protected]>
Co-authored-by: Mariano <[email protected]>
Co-authored-by: HollyChou <[email protected]>
Co-authored-by: altaywtf <[email protected]>
Co-authored-by: Neerav Makwana <[email protected]>
Co-authored-by: Sally O'Malley <[email protected]>
Co-authored-by: Harold Hunt <[email protected]>
Co-authored-by: huntharo <[email protected]>
Co-authored-by: David Guttman <[email protected]>
Co-authored-by: Onur Solmaz <[email protected]>
Co-authored-by: Onur Solmaz <[email protected]>
Co-authored-by: Han Pingtian <[email protected]>
Co-authored-by: Maho Pan <[email protected]>
Co-authored-by: Josh Lehman <[email protected]>
Co-authored-by: w-sss <[email protected]>
Co-authored-by: scoootscooob <[email protected]>
Co-authored-by: Bob <[email protected]>
Co-authored-by: VioGarden <[email protected]>
Co-authored-by: scoootscooob <[email protected]>
Co-authored-by: Devin Robison <[email protected]>
netandreus pushed a commit to netandreus/openclaw that referenced this pull request Mar 25, 2026
Migrates the Teams extension from @microsoft/agents-hosting to the official Teams SDK (@microsoft/teams.apps + @microsoft/teams.api) and implements Microsoft's AI UX best practices for Teams agents.

- AI-generated label on all bot messages (Teams native badge + thumbs up/down)
- Streaming responses in 1:1 chats via Teams streaminfo protocol
- Welcome card with configurable prompt starters on bot install
- Feedback with reflective learning (negative feedback triggers background reflection)
- Typing indicators for personal + group chats (disabled for channels)
- Informative status updates (progress bar while LLM processes)
- JWT validation via Teams SDK createServiceTokenValidator
- User-Agent: teams.ts[apps]/<sdk-version> OpenClaw/<version> on outbound requests
- Fix copy-pasted image downloads (smba.trafficmanager.net auth allowlist)
- Pre-parse auth gate (reject unauthenticated requests before body parsing)
- Reflection dispatcher lifecycle fix (prevent leaked dispatchers)
- Colon-safe session filenames (Windows compatibility)
- Cooldown cache eviction (prevent unbounded memory growth)

Closes openclaw#51806
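The AI-generated label and feedback loop from the summary above follow Microsoft's message-entity convention for Teams bots; a sketch of the outbound activity shape (the exact wiring through the Teams SDK may differ from this hand-built object):

```typescript
// Sketch of an outbound message activity carrying the entity that makes
// Teams render the "AI generated" badge, plus the channelData flag that
// enables the native thumbs up/down feedback UI.
const activity = {
  type: "message",
  text: "Here is my answer...",
  entities: [
    {
      type: "https://schema.org/Message",
      "@type": "Message",
      "@context": "https://schema.org",
      "@id": "",
      additionalType: ["AIGeneratedContent"], // triggers the "AI generated" label
    },
  ],
  channelData: {
    feedbackLoopEnabled: true, // enables native thumbs up/down
  },
};
```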
npmisantosh pushed a commit to npmisantosh/openclaw that referenced this pull request Mar 25, 2026
godlin-gh pushed a commit to YouMindInc/openclaw that referenced this pull request Mar 27, 2026
Labels

channel: msteams Channel integration: msteams docs Improvements or additions to documentation maintainer Maintainer-authored PR scripts Repository scripts size: XL

Development

Successfully merging this pull request may close these issues.

msteams: implement Teams AI agent UX best practices

4 participants