
fix(models): preserve stream usage compat opt-ins#45733

Merged
ademczuk merged 1 commit into openclaw:main from pezy:codex/preserve-stream-usage-compat-upstream
Mar 15, 2026

Conversation

@pezy
Contributor

@pezy pezy commented Mar 14, 2026

Summary

This fixes a model-compat policy bug for non-native openai-completions endpoints.

OpenClaw currently forces both supportsDeveloperRole and supportsUsageInStreaming to false for any non-native OpenAI-compatible base URL. That is too broad for streaming usage: built-in catalogs and user config can already declare explicit compat opt-ins, but the normalizer overwrites them.

Bailian/DashScope is a concrete repro. It can return streaming token usage, but OpenClaw suppresses stream_options.include_usage before the request is sent, so usage / lastCallUsage never shows up in the final result.

Root cause

This is not a Bailian-specific parser bug in this repo.

The issue is that normalizeModelCompat() treats streaming usage as a blanket "non-native OpenAI" incompatibility, even when a provider model or user config has already opted in explicitly. That makes the compat schema inconsistent: callers can set supportsUsageInStreaming, but the normalizer silently discards it.

There is one important constraint here: @mariozechner/pi-ai enables stream_options.include_usage unless compat.supportsUsageInStreaming === false. So removing the default-off behavior entirely would re-enable streaming usage for unknown custom proxies by default, which would be too risky.
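That constraint boils down to a small predicate. This is an illustrative sketch of the gating described above, not pi-ai's actual source; the interface and function names are assumed:

```typescript
// Sketch of the upstream gating: include_usage is sent unless the flag is
// explicitly false, so an unset flag means "enabled". This is why the
// conservative default for unknown proxies must inject an explicit `false`.
interface ModelCompat {
  supportsUsageInStreaming?: boolean;
}

function shouldIncludeStreamUsage(compat?: ModelCompat): boolean {
  // Only an explicit `false` disables stream_options.include_usage.
  return compat?.supportsUsageInStreaming !== false;
}
```

Under this rule, `undefined` and `true` behave identically at request time, which is what makes the normalizer's injected `false` load-bearing.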

What this changes

This PR keeps the conservative default for unknown non-native endpoints, but preserves explicit streaming-usage opt-ins.

Specifically:

  • non-native openai-completions endpoints still force supportsDeveloperRole = false
  • supportsUsageInStreaming = false is only injected when the flag is unspecified
  • explicit supportsUsageInStreaming: true and false now survive normalization
  • built-in Moonshot and Model Studio catalogs now explicitly opt in to streaming usage support

This means known compatible providers can expose token usage, while unknown custom proxies stay on the current safe default.
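Under those rules, the non-native normalization step might look like the following hedged sketch (the interface shape and function name are illustrative, not the repo's actual code):

```typescript
// Sketch of the changed rule for non-native openai-completions endpoints:
// developer-role support is still forced off, but the streaming-usage
// default is only injected when the caller left the flag unset.
interface ModelCompat {
  supportsDeveloperRole?: boolean;
  supportsUsageInStreaming?: boolean;
}

function normalizeNonNativeCompat(compat?: ModelCompat): ModelCompat {
  const hasStreamingUsageOverride =
    compat?.supportsUsageInStreaming !== undefined;
  return {
    ...compat,
    supportsDeveloperRole: false, // still forced off for non-native endpoints
    // conservative default only when the caller did not opt in or out
    ...(hasStreamingUsageOverride ? {} : { supportsUsageInStreaming: false }),
  };
}
```

With this shape, an explicit `true` or `false` from a catalog or user config survives, while an unset flag still lands on the safe default.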

Scope boundary

This PR does not change pi-ai parsing or retry behavior.

It also does not enable streaming usage for arbitrary custom OpenAI-compatible endpoints by default. Those remain opt-in.
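For a custom endpoint, the opt-in remains a per-model config decision. A hypothetical model entry might look like this (the config keys and the DashScope-compatible baseUrl are assumptions based on the discussion above, not a documented schema):

```typescript
// Hypothetical custom-endpoint model entry opting in to streaming usage.
// With this PR, the explicit opt-in survives normalizeModelCompat instead
// of being silently overwritten to false.
const customModel = {
  api: "openai-completions",
  baseUrl: "https://dashscope.aliyuncs.com/compatible-mode/v1",
  compat: {
    supportsUsageInStreaming: true, // explicit opt-in, preserved after this PR
  },
};
```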

Validation

Checks run from this branch:

  • CI=1 pnpm exec vitest run --maxWorkers=1 --reporter=verbose src/agents/model-compat.test.ts
  • CI=1 pnpm exec vitest run --maxWorkers=1 --reporter=verbose src/agents/models-config.providers.moonshot.test.ts
  • CI=1 pnpm exec vitest run --maxWorkers=1 --reporter=verbose src/agents/models-config.providers.modelstudio.test.ts -t 'should build the static Model Studio provider catalog'
  • pnpm build
  • pnpm check
  • codex review --base openclaw-upstream/main

Manual verification:

  • built the upstream checkout locally and ran a real Bailian-backed openclaw agent --local --json request
  • confirmed usage and lastCallUsage were present in the result
  • observed usage.input=16571, usage.output=76, usage.total=16647

This keeps the implementation generic while using Bailian as the motivating repro and validation case.

@openclaw-barnacle openclaw-barnacle bot added the agents (Agent runtime and tooling) and size: S labels Mar 14, 2026
@greptile-apps
Contributor

greptile-apps bot commented Mar 14, 2026

Greptile Summary

This PR fixes a normalizeModelCompat bug where supportsUsageInStreaming was unconditionally overridden to false for all non-native openai-completions endpoints, preventing built-in catalog entries (Bailian/DashScope via ModelStudio, Moonshot) from ever surfacing token usage in streaming responses. The fix preserves explicit true/false flags for streaming usage while keeping the conservative default-off behavior for unconfigured endpoints.

Key changes:

  • supportsUsageInStreaming in normalizeModelCompat is now only forced to false when the flag is undefined; explicit true or false values are preserved as-is.
  • supportsDeveloperRole is still always forced to false for non-native endpoints — this is a silent breaking change for users who had previously set compat: { supportsDeveloperRole: true } in their custom endpoint configs. The old code respected that opt-in; the new code silently overrides it. While intentional and documented in the PR description, it may be worth calling out in release notes or a migration guide.
  • ModelStudio (buildModelStudioProvider) and Moonshot (buildMoonshotProvider) static catalogs now explicitly set supportsUsageInStreaming: true.
  • New test cases thoroughly cover the preserve-explicit-true, preserve-explicit-false, and mixed-flag scenarios.

Confidence Score: 4/5

  • Safe to merge; the streaming-usage fix is correct and well-tested, with one intentional but externally undocumented breaking change to the supportsDeveloperRole opt-in behavior.
  • The core logic for preserving supportsUsageInStreaming is correct, the early-exit condition is sound, and all edge cases are covered by new tests. The only concern is the silent removal of supportsDeveloperRole: true opt-in support for non-native endpoints — a behavioral regression for any user who had that set. This is intentional per the PR description but could surprise existing users without a changelog note.
  • src/agents/model-compat.ts — verify the intentional removal of supportsDeveloperRole: true opt-in is acceptable as a breaking change and that it's surfaced in release notes.
Review comment on src/agents/model-compat.ts, lines 69-73:
**`supportsDeveloperRole: true` opt-in silently discarded**

The old code preserved an explicit `supportsDeveloperRole: true` from user/catalog config (the now-renamed test "respects explicit supportsDeveloperRole true…" asserted `.toBe(true)`). The new `needsDeveloperRoleOverride` condition is `compat?.supportsDeveloperRole !== false`, which is `true` whenever the flag is `true` *or* `undefined`, so an explicit `supportsDeveloperRole: true` in a user's model definition is now silently overridden to `false`.

This is an asymmetric treatment compared to `supportsUsageInStreaming`: streaming usage preserves both explicit `true` *and* explicit `false`, but developer-role only preserves explicit `false`. Any user who previously opted in with `compat: { supportsDeveloperRole: true }` will lose that capability without warning after this change.

If this is intentional (the comment says "force it off for non-native endpoints"), consider adding an explanatory note in the config schema docs or a console warning so users aren't surprised, and make sure the changelog/migration notes call this out explicitly.
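The asymmetry the comment describes can be shown side by side. The `!== false` condition is quoted from the comment above; the surrounding names are illustrative:

```typescript
// The condition flagged in review: `!== false` is satisfied by both `true`
// and `undefined`, so an explicit `true` opt-in is overridden anyway.
function needsDeveloperRoleOverride(
  compat?: { supportsDeveloperRole?: boolean },
): boolean {
  return compat?.supportsDeveloperRole !== false;
}

// A symmetric check, matching how supportsUsageInStreaming is treated,
// would only apply the default when the flag is genuinely unspecified.
function needsDeveloperRoleDefault(
  compat?: { supportsDeveloperRole?: boolean },
): boolean {
  return compat?.supportsDeveloperRole === undefined;
}
```

The difference is exactly the `true` case: the first predicate still fires on an explicit opt-in, the second leaves it alone.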


Last reviewed commit: 7b50052


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7b500520f3

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@pezy
Contributor Author

pezy commented Mar 14, 2026

Addressed the review feedback in 85817c8.

  • restored the existing behavior of preserving explicit supportsDeveloperRole: true overrides on non-native openai-completions endpoints, so this PR stays focused on streaming-usage compat
  • moved built-in streaming-usage opt-ins out of the static provider catalogs and into a provider-side helper that only enables them for known native Moonshot / Model Studio baseUrls
  • added regression coverage for native-vs-custom baseUrl behavior so custom OpenAI-compatible proxies stay on the conservative default
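The provider-side helper described above might look like this sketch. Only the helper name `applyNativeStreamingUsageCompat` comes from the thread; the host list, interfaces, and structure are assumptions for illustration:

```typescript
// Opt known native baseUrls into streaming usage at config build time,
// leaving custom OpenAI-compatible proxies on the conservative default.
// The host list here is illustrative, not the repo's actual allowlist.
const NATIVE_STREAM_USAGE_HOSTS = new Set([
  "api.moonshot.ai",
  "api.moonshot.cn",
  "dashscope.aliyuncs.com",
]);

interface ModelDef {
  baseUrl?: string;
  compat?: { supportsUsageInStreaming?: boolean };
}

function applyNativeStreamingUsageCompat(model: ModelDef): ModelDef {
  let host: string;
  try {
    host = new URL(model.baseUrl ?? "").hostname;
  } catch {
    return model; // malformed or missing baseUrl: leave untouched
  }
  if (!NATIVE_STREAM_USAGE_HOSTS.has(host)) return model;
  return {
    ...model,
    compat: { ...model.compat, supportsUsageInStreaming: true },
  };
}
```

Running this after baseUrl resolution keeps the opt-in keyed to where the request actually goes, not to the provider id a proxy happens to reuse.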

Validation rerun after the update:

  • CI=1 pnpm exec vitest run --maxWorkers=1 --reporter=verbose src/agents/model-compat.test.ts src/agents/models-config.providers.moonshot.test.ts src/agents/models-config.providers.modelstudio.test.ts
  • pnpm check
  • pnpm build

@pezy
Contributor Author

pezy commented Mar 14, 2026

Full output from the validation rerun:
RUN v4.1.0 /Users/chenpeizhe/Vita/openclaw-upstream-pr

✓ src/agents/models-config.providers.moonshot.test.ts > moonshot implicit provider (#33637) > uses explicit CN baseUrl when provided 25ms
✓ src/agents/models-config.providers.moonshot.test.ts > moonshot implicit provider (#33637) > keeps streaming usage opt-in disabled for custom Moonshot-compatible baseUrls 1ms
✓ src/agents/models-config.providers.moonshot.test.ts > moonshot implicit provider (#33637) > defaults to .ai baseUrl when no explicit provider 1ms
✓ src/agents/models-config.providers.modelstudio.test.ts > Model Studio implicit provider > should opt native Model Studio baseUrls into streaming usage 1ms
✓ src/agents/models-config.providers.modelstudio.test.ts > Model Studio implicit provider > should keep streaming usage opt-in disabled for custom Model Studio-compatible baseUrls 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat — Anthropic baseUrl > strips /v1 suffix from anthropic-messages baseUrl 1ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat — Anthropic baseUrl > strips trailing /v1/ (with slash) from anthropic-messages baseUrl 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat — Anthropic baseUrl > leaves anthropic-messages baseUrl without /v1 unchanged 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat — Anthropic baseUrl > leaves baseUrl undefined unchanged for anthropic-messages 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat — Anthropic baseUrl > does not strip /v1 from non-anthropic-messages models 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat — Anthropic baseUrl > strips /v1 from custom Anthropic proxy baseUrl 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for z.ai models 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for moonshot models 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for custom moonshot-compatible endpoints 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for DashScope provider ids 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for DashScope-compatible endpoints 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > leaves native api.openai.com model untouched 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for Azure OpenAI (Chat Completions, not Responses API) 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for generic custom openai-completions provider 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsUsageInStreaming off for generic custom openai-completions provider 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for Qwen proxy via openai-completions 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > leaves openai-completions model with empty baseUrl untouched 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > forces supportsDeveloperRole off for malformed baseUrl values 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > respects explicit supportsDeveloperRole true on non-native endpoints 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > preserves explicit supportsUsageInStreaming true on non-native endpoints 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > preserves explicit supportsUsageInStreaming false on non-native endpoints 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > still forces flags off when not explicitly set by user 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > does not mutate caller model when forcing supportsDeveloperRole off 1ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > does not override explicit compat false 0ms
✓ src/agents/model-compat.test.ts > normalizeModelCompat > preserves explicit usage compat when developer role is explicitly enabled 0ms
✓ src/agents/model-compat.test.ts > isModernModelRef > includes OpenAI gpt-5.4 variants in modern selection 0ms
✓ src/agents/model-compat.test.ts > isModernModelRef > excludes opencode minimax variants from modern selection 0ms
✓ src/agents/model-compat.test.ts > isModernModelRef > keeps non-minimax opencode modern models 0ms
✓ src/agents/model-compat.test.ts > isModernModelRef > accepts all opencode-go models without zen exclusions 0ms
✓ src/agents/model-compat.test.ts > resolveForwardCompatModel > resolves openai gpt-5.4 via gpt-5.2 template 0ms
✓ src/agents/model-compat.test.ts > resolveForwardCompatModel > resolves openai gpt-5.4 without templates using normalized fallback defaults 0ms
✓ src/agents/model-compat.test.ts > resolveForwardCompatModel > resolves openai gpt-5.4-pro via template fallback 0ms
✓ src/agents/model-compat.test.ts > resolveForwardCompatModel > resolves openai-codex gpt-5.4 via codex template fallback 0ms
✓ src/agents/model-compat.test.ts > resolveForwardCompatModel > resolves anthropic opus 4.6 via 4.5 template 0ms
✓ src/agents/model-compat.test.ts > resolveForwardCompatModel > resolves anthropic sonnet 4.6 dot variant with suffix 0ms
✓ src/agents/model-compat.test.ts > resolveForwardCompatModel > does not resolve anthropic 4.6 fallback for other providers 0ms

Test Files 3 passed (3)
Tests 41 passed (41)
Start at 14:20:11
Duration 8.73s (transform 3.33s, setup 222ms, import 8.27s, tests 39ms, environment 0ms)

[email protected] check /Users/chenpeizhe/Vita/openclaw-upstream-pr
pnpm check:host-env-policy:swift && pnpm format:check && pnpm tsgo && pnpm lint && pnpm lint:tmp:no-random-messaging && pnpm lint:tmp:channel-agnostic-boundaries && pnpm lint:tmp:no-raw-channel-fetch && pnpm lint:agent:ingress-owner && pnpm lint:plugins:no-register-http-handler && pnpm lint:plugins:no-monolithic-plugin-sdk-entry-imports && pnpm lint:webhook:no-low-level-body-read && pnpm lint:auth:no-pairing-store-group && pnpm lint:auth:pairing-account-scope

[email protected] check:host-env-policy:swift /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/generate-host-env-security-policy-swift.mjs --check

OK apps/macos/Sources/OpenClaw/HostEnvSecurityPolicy.generated.swift

[email protected] format:check /Users/chenpeizhe/Vita/openclaw-upstream-pr
oxfmt --check

Checking formatting...

All matched files use the correct format.
Finished in 5172ms on 7167 files using 10 threads.

[email protected] lint /Users/chenpeizhe/Vita/openclaw-upstream-pr
oxlint --type-aware

Found 0 warnings and 0 errors.
Finished in 6.9s on 5376 files with 136 rules using 10 threads.

[email protected] lint:tmp:no-random-messaging /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-no-random-messaging-tmp.mjs

[email protected] lint:tmp:channel-agnostic-boundaries /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-channel-agnostic-boundaries.mjs

[email protected] lint:tmp:no-raw-channel-fetch /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-no-raw-channel-fetch.mjs

[email protected] lint:agent:ingress-owner /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-ingress-agent-owner-context.mjs

[email protected] lint:plugins:no-register-http-handler /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-no-register-http-handler.mjs

[email protected] lint:plugins:no-monolithic-plugin-sdk-entry-imports /Users/chenpeizhe/Vita/openclaw-upstream-pr
node --import tsx scripts/check-no-monolithic-plugin-sdk-entry-imports.ts

OK: bundled plugin source files use scoped plugin-sdk subpaths (790 checked).

[email protected] lint:webhook:no-low-level-body-read /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-webhook-auth-body-order.mjs

[email protected] lint:auth:no-pairing-store-group /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-no-pairing-store-group-auth.mjs

[email protected] lint:auth:pairing-account-scope /Users/chenpeizhe/Vita/openclaw-upstream-pr
node scripts/check-pairing-account-scope.mjs
[email protected] build /Users/chenpeizhe/Vita/openclaw-upstream-pr
pnpm canvas:a2ui:bundle && node scripts/tsdown-build.mjs && node scripts/copy-plugin-sdk-root-alias.mjs && pnpm build:plugin-sdk:dts && node --import tsx scripts/write-plugin-sdk-entry-dts.ts && node --import tsx scripts/canvas-a2ui-copy.ts && node --import tsx scripts/copy-hook-metadata.ts && node --import tsx scripts/copy-export-html-templates.ts && node --import tsx scripts/write-build-info.ts && node --import tsx scripts/write-cli-startup-metadata.ts && node --import tsx scripts/write-cli-compat.ts

[email protected] canvas:a2ui:bundle /Users/chenpeizhe/Vita/openclaw-upstream-pr
bash scripts/bundle-a2ui.sh

A2UI bundle up to date; skipping.

[email protected] build:plugin-sdk:dts /Users/chenpeizhe/Vita/openclaw-upstream-pr
tsc -p tsconfig.plugin-sdk.dts.json

[copy-hook-metadata] Copied 4 hook metadata files.
[copy-export-html-templates] Copied 5 export-html assets.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 85817c864f


@ademczuk
Contributor

Reviewed the diff and the approach is solid. The split between preserving explicit compat overrides in model-compat.ts and applying per-provider native URL opt-ins via applyNativeStreamingUsageCompat is clean.

One blocker: there's a rebase conflict in model-compat.ts. Upstream added supportsStrictMode as a third compat flag since your last push, which changes the early-return condition and the compat object shape. The conflict touches the same lines your PR modifies (the forcedDeveloperRole/needsDeveloperRoleOverride logic and the compat spread).

Could you rebase onto current main and integrate the supportsStrictMode handling? The upstream changes are in the same function (normalizeModelCompat) around lines 65-87. The strictMode flag follows the same pattern - force it off for non-native endpoints unless explicitly set.

Happy to re-review once rebased.

Contributor

@ademczuk ademczuk left a comment


Approach looks correct. The model-compat.ts change properly preserves explicit streaming usage overrides (both true and false) without changing behavior for unset cases. The applyNativeStreamingUsageCompat function is scoped to known-good native URLs for moonshot and modelstudio only, applied at the right stage in the plan pipeline.

Approving the approach. Needs rebase to resolve the supportsStrictMode conflict before merge.

@ademczuk
Contributor

Hey @pezy - this needs a rebase after #45497 landed (adds supportsStrictMode to model-compat.ts).

The conflict is in normalizeModelCompat() - three spots where supportsStrictMode was added alongside the existing flags. Resolution is straightforward:

  1. Early-return condition: add compat?.supportsStrictMode !== undefined to the existing check alongside your hasStreamingUsageOverride
  2. Spread path (compat truthy): add supportsStrictMode: targetStrictMode alongside your ...(hasStreamingUsageOverride ? {} : { supportsUsageInStreaming: false })
  3. Spread path (compat falsy): add supportsStrictMode: false

I pushed a rebased version to ademczuk/openclaw:pezy-45733-rebased if you want to cherry-pick or reference it. All tests pass (45/45 model-compat + modelstudio + moonshot tests green).

Happy to help get this landed - the approach is solid and I'd like to close out the competing PRs in this area.
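Following the three spots listed above, the resolved function might take roughly this shape. The flag names come from the thread; the early-return structure and defaults are assumptions for illustration:

```typescript
// Sketch of the three-spot conflict resolution: supportsStrictMode joins the
// early-return check and both spread paths, following the same
// only-default-when-unset pattern as supportsUsageInStreaming.
interface ModelCompat {
  supportsDeveloperRole?: boolean;
  supportsUsageInStreaming?: boolean;
  supportsStrictMode?: boolean;
}

function normalizeNonNativeCompat(compat?: ModelCompat): ModelCompat {
  const hasStreamingUsageOverride =
    compat?.supportsUsageInStreaming !== undefined;
  // (1) early return: every flag already explicit, nothing to inject
  if (
    compat?.supportsDeveloperRole === false &&
    hasStreamingUsageOverride &&
    compat?.supportsStrictMode !== undefined
  ) {
    return compat;
  }
  if (compat) {
    // (2) compat present: preserve explicit values, default the rest off
    return {
      ...compat,
      supportsDeveloperRole: compat.supportsDeveloperRole ?? false,
      ...(hasStreamingUsageOverride ? {} : { supportsUsageInStreaming: false }),
      supportsStrictMode: compat.supportsStrictMode ?? false,
    };
  }
  // (3) no compat at all: conservative defaults across the board
  return {
    supportsDeveloperRole: false,
    supportsUsageInStreaming: false,
    supportsStrictMode: false,
  };
}
```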

@pezy pezy force-pushed the codex/preserve-stream-usage-compat-upstream branch from 1e6bcb9 to 41fd398 on March 15, 2026 at 15:45
@pezy
Contributor Author

pezy commented Mar 15, 2026

Thanks again @ademczuk — your rebase notes and reference branch were very helpful.

I wanted to follow up on how I handled this. I did not directly cherry-pick ademczuk/openclaw:pezy-45733-rebased, but I did use your guidance as the source of truth for the conflict resolution:

  • rebased the PR work onto the current main
  • integrated supportsStrictMode into normalizeModelCompat()
  • kept the native Moonshot / Model Studio streaming-usage opt-ins in the final provider-side compat pass after baseUrl resolution

I treated your branch as a reference implementation for the intended resolution, while reapplying the PR on top of a newer main baseline. So the goal was to fully respect the substance of your suggestion, rather than copy an older base commit-for-commit.

I also reran:

  • pnpm test -- src/agents/model-compat.test.ts src/agents/models-config.providers.moonshot.test.ts src/agents/models-config.providers.modelstudio.test.ts
  • pnpm check
  • pnpm build

Really appreciate you pushing the rebased branch and spelling out the three conflict points — that made it much easier to land this cleanly.

@pezy pezy force-pushed the codex/preserve-stream-usage-compat-upstream branch from ed225f1 to 0994eab on March 15, 2026 at 16:12
@ademczuk ademczuk merged commit 42837a0 into openclaw:main Mar 15, 2026
31 of 32 checks passed
romeroej2 pushed a commit to romeroej2/openclaw that referenced this pull request Mar 16, 2026
Preserves explicit `supportsUsageInStreaming` overrides from built-in provider
catalogs and user config instead of unconditionally forcing `false` on non-native
openai-completions endpoints.

Adds `applyNativeStreamingUsageCompat()` to set `supportsUsageInStreaming: true`
on ModelStudio (DashScope) and Moonshot models at config build time so their
native streaming usage works out of the box.

Closes openclaw#46142

Co-authored-by: pezy <[email protected]>
guiramos added a commit to butley/openclaw that referenced this pull request Mar 22, 2026
* feat: make compaction timeout configurable via agents.defaults.compaction.timeoutSeconds (openclaw#46889)

* feat: make compaction timeout configurable via agents.defaults.compaction.timeoutSeconds

The hardcoded 5-minute (300s) compaction timeout causes large sessions
to enter a death spiral where compaction repeatedly fails and the
session grows indefinitely. This adds agents.defaults.compaction.timeoutSeconds
to allow operators to override the compaction safety timeout.

Default raised to 900s (15min) which is sufficient for sessions up to
~400k tokens. The resolved timeout is also used for the session write
lock duration so locks don't expire before compaction completes.

Fixes openclaw#38233

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

* test: add resolveCompactionTimeoutMs tests

Cover config resolution edge cases: undefined config, missing
compaction section, valid seconds, fractional values, zero,
negative, NaN, and Infinity.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

* fix: add timeoutSeconds to compaction Zod schema

The compaction object schema uses .strict(), so setting the new
timeoutSeconds config option would fail validation at startup.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

* fix: enforce integer constraint on compaction timeoutSeconds schema

Prevents sub-second values like 0.5 which would floor to 0ms and
cause immediate compaction timeout. Matches pattern of other
integer timeout fields in the schema.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

* fix: clamp compaction timeout to Node timer-safe maximum

Values above ~2.1B ms overflow Node's setTimeout to 1ms, causing
immediate timeout. Clamp to MAX_SAFE_TIMEOUT_MS matching the
pattern in agents/timeout.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
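The overflow the commit above guards against is a real Node.js behavior: setTimeout stores its delay in a signed 32-bit integer, so delays above 2147483647 ms (about 24.8 days) are coerced to 1 ms. A clamp like the one described might look like this sketch (the helper name and fallback handling are assumptions, not the repo's actual code):

```typescript
// Clamp a timeout to Node's timer-safe maximum; non-finite or non-positive
// inputs fall back to a caller-supplied default instead of overflowing.
const MAX_SAFE_TIMEOUT_MS = 2_147_483_647; // 2^31 - 1, Node's setTimeout cap

function clampTimeoutMs(ms: number, fallbackMs: number): number {
  if (!Number.isFinite(ms) || ms <= 0) return fallbackMs;
  return Math.min(Math.floor(ms), MAX_SAFE_TIMEOUT_MS);
}
```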

* fix: add FIELD_LABELS entry for compaction timeoutSeconds

Maintains label/help parity invariant enforced by
schema.help.quality.test.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

* fix: align compaction timeouts with abort handling

* fix: land compaction timeout handling (openclaw#46889) (thanks @asyncjason)

---------

Co-authored-by: Jason Separovic <[email protected]>
Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
Co-authored-by: Ayaan Zaidi <[email protected]>

* fix: harden compaction timeout follow-ups

* Docs: fix stale Clawdbot branding in agent workflow file (openclaw#46963)

Co-authored-by: webdevpraveen <[email protected]>

* docs: replace outdated Clawdbot references with OpenClaw in skill docs (openclaw#41563)

Update 5 references to the old "Clawdbot" name in
skills/apple-reminders/SKILL.md and skills/imsg/SKILL.md.

Co-authored-by: imanisynapse <[email protected]>

* Docs: switch README logo to SVG assets (openclaw#47049)

* fix: Disable strict mode tools for non-native openai-completions compatible APIs (openclaw#45497)

Merged via squash.

Prepared head SHA: 20fe05f
Co-authored-by: sahancava <[email protected]>
Co-authored-by: frankekn <[email protected]>
Reviewed-by: @frankekn

* fix: forward forceDocument through sendPayload path (follow-up to openclaw#45111) (openclaw#47119)

Merged via squash.

Prepared head SHA: d791190
Co-authored-by: thepagent <[email protected]>
Reviewed-by: @frankekn

* fix(android): support android node  `calllog.search` (openclaw#44073)

* fix(android): support android node  `calllog.search`

* fix(android): support android node calllog.search

* fix(android): wire callLog through shared surfaces

* fix: land Android callLog support (openclaw#44073) (thanks @lxk7280)

---------

Co-authored-by: lixuankai <[email protected]>
Co-authored-by: Ayaan Zaidi <[email protected]>

* fix(whatsapp): restore append recency filter lost in extensions refactor, handle Long timestamps (openclaw#42588)

Merged via squash.

Prepared head SHA: 8ce59bb
Co-authored-by: MonkeyLeeT <[email protected]>
Co-authored-by: scoootscooob <[email protected]>
Reviewed-by: @scoootscooob

* fix(web): handle 515 Stream Error during WhatsApp QR pairing (openclaw#27910)

* fix(web): handle 515 Stream Error during WhatsApp QR pairing

getStatusCode() never unwrapped the lastDisconnect wrapper object,
so login.errorStatus was always undefined and the 515 restart path
in restartLoginSocket was dead code.

- Add err.error?.output?.statusCode fallback to getStatusCode()
- Export waitForCredsSaveQueue() so callers can await pending creds
- Await creds flush in restartLoginSocket before creating new socket

Fixes openclaw#3942
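The unwrap fix described above amounts to trying both error shapes. The function name comes from the commit message; the exact type shapes are assumptions based on the wrapped/flat forms discussed in this thread:

```typescript
// Resolve the disconnect status code from either the flat Boom-style shape
// or the wrapped lastDisconnect shape; before the fix only the flat shape
// was handled, so wrapped 515 errors yielded undefined.
type DisconnectError = {
  output?: { statusCode?: number };
  error?: { output?: { statusCode?: number } };
};

function getStatusCode(err?: DisconnectError): number | undefined {
  return err?.output?.statusCode ?? err?.error?.output?.statusCode;
}
```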

* test: update session mock for getStatusCode unwrap + waitForCredsSaveQueue

Mirror the getStatusCode fix (err.error?.output?.statusCode fallback)
in the test mock and export waitForCredsSaveQueue so restartLoginSocket
tests work correctly.

* fix(web): scope creds save queue per-authDir to avoid cross-account blocking

The credential save queue was a single global promise chain shared by all
WhatsApp accounts. In multi-account setups, a slow save on one account
blocked credential writes and 515 restart recovery for unrelated accounts.

Replace the global queue with a per-authDir Map so each account's creds
serialize independently. waitForCredsSaveQueue() now accepts an optional
authDir to wait on a single account's queue, or waits on all when omitted.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
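The per-authDir queue described above is essentially one promise chain per key. This is a hedged sketch (the Map name and `waitForCredsSaveQueue` come from the commit messages; everything else is illustrative):

```typescript
// One save chain per authDir: saves for the same account serialize, while
// different accounts proceed independently.
const credsSaveQueues = new Map<string, Promise<void>>();

function enqueueCredsSave(
  authDir: string,
  save: () => Promise<void>,
): Promise<void> {
  const prev = credsSaveQueues.get(authDir) ?? Promise.resolve();
  const next = prev.then(save);
  // store a settled-or-swallowed copy so one failed save does not poison
  // the chain for later saves on the same account
  credsSaveQueues.set(authDir, next.then(() => {}, () => {}));
  return next;
}

function waitForCredsSaveQueue(authDir?: string): Promise<void> {
  if (authDir !== undefined) {
    return credsSaveQueues.get(authDir) ?? Promise.resolve();
  }
  // no authDir: wait for every account's pending saves
  return Promise.all(credsSaveQueues.values()).then(() => {});
}
```

The design point is the Map key: with a single global chain, a stalled write for one account blocks 515 restart recovery for all of them; keyed chains bound the blast radius to one authDir.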

* test: use real Baileys v7 error shape in 515 restart test

The test was using { output: { statusCode: 515 } } which was already
handled before the fix. Updated to use the actual Baileys v7 shape
{ error: { output: { statusCode: 515 } } } to cover the new fallback
path in getStatusCode.

Co-Authored-By: Claude Code (Opus 4.6) <[email protected]>

* fix(web): bound credential-queue wait during 515 restart

Prevents restartLoginSocket from blocking indefinitely if a queued
saveCreds() promise stalls (e.g. hung filesystem write).

Co-Authored-By: Claude <[email protected]>

* fix: clear flush timeout handle and assert creds queue in test

Co-Authored-By: Claude <[email protected]>

* fix: evict settled credsSaveQueues entries to prevent unbounded growth

Co-Authored-By: Claude <[email protected]>

* fix: share WhatsApp 515 creds flush handling (openclaw#27910) (thanks @asyncjason)

---------

Co-authored-by: Jason Separovic <[email protected]>
Co-authored-by: Claude Opus 4.6 <[email protected]>
Co-authored-by: Ayaan Zaidi <[email protected]>

* Deduplicate repeated tool call IDs for OpenAI-compatible APIs (openclaw#40996)

Merged via squash.

Prepared head SHA: 38d8048
Co-authored-by: xaeon2026 <[email protected]>
Co-authored-by: frankekn <[email protected]>
Reviewed-by: @frankekn

* fix(gateway): skip Control UI pairing when auth.mode=none (closes openclaw#42931) (openclaw#47148)

When auth is completely disabled (mode=none), requiring device pairing
for Control UI operator sessions adds friction without security value
since any client can already connect without credentials.

Add authMode parameter to shouldSkipControlUiPairing so the bypass
fires only for Control UI + operator role + auth.mode=none. This avoids
the openclaw#43478 regression where a top-level OR disabled pairing for ALL
websocket clients.

* fix: preserve Telegram word boundaries when rechunking HTML (openclaw#47274)

* fix: preserve Telegram chunk word boundaries

* fix: address Telegram chunking review feedback

* fix: preserve Telegram retry separators

* fix: preserve Telegram chunking boundaries (openclaw#47274)

* test(whatsapp): fix stale append inbox expectation

* chore(gateway): ignore `.test.ts` changes in `gateway:watch` (openclaw#36211)

* fix: harden remote cdp probes

* feat(feishu): add ACP and subagent session binding (openclaw#46819)

* feat(feishu): add ACP session support

* fix(feishu): preserve sender-scoped ACP rebinding

* fix(feishu): recover sender scope from bound ACP sessions

* fix(feishu): support DM ACP binding placement

* feat(feishu): add current-conversation session binding

* fix(feishu): avoid DM parent binding fallback

* fix(feishu): require canonical topic sender ids

* fix(feishu): honor sender-scoped ACP bindings

* fix(feishu): allow user-id ACP DM bindings

* fix(feishu): recover user-id ACP DM bindings

* ACP: fail closed on conflicting tool identity hints (openclaw#46817)

* ACP: fail closed on conflicting tool identity hints

* ACP: restore rawInput fallback for safe tool resolution

* ACP tests: cover rawInput-only safe tool approval

* fix: harden mention pattern regex compilation

* Nodes: recheck queued actions before delivery (openclaw#46815)

* Nodes: recheck queued actions before delivery

* Nodes tests: cover pull-time policy recheck

* Nodes tests: type node policy mocks explicitly

* refactor: drop deprecated whatsapp mention pattern sdk helper

* added a fix for memory leak on 2gb ram (openclaw#46522)

* Nodes tests: prove pull-time policy revalidation

* fix: harden device token rotation denial paths

* style: format imported model helpers

* Plugins: preserve scoped ids and reserve bundled duplicates (openclaw#47413)

* Plugins: preserve scoped ids and reserve bundled duplicates

* Changelog: add plugin scoped id note

* Plugins: harden scoped install ids

* Plugins: reserve scoped install dirs

* Plugins: migrate legacy scoped update ids

* CLI: reduce channels add startup memory (openclaw#46784)

* CLI: lazy-load channel subcommand handlers

* Channels: defer add command dependencies

* CLI: skip status JSON plugin preload

* CLI: cover status JSON route preload

* Status: trim JSON security audit path

* Status: update JSON fast-path tests

* CLI: cover root help fast path

* CLI: fast-path root help

* Status: keep JSON security parity

* Status: restore JSON security tests

* CLI: document status plugin preload

* Channels: reuse Telegram account import

* Integrations: tighten inbound callback and allowlist checks (openclaw#46787)

* Integrations: harden inbound callback and allowlist handling

* Integrations: address review follow-ups

* Update CHANGELOG.md

* Mattermost: avoid command-gating open button callbacks

* ACP: require admin scope for mutating internal actions (openclaw#46789)

* ACP: require admin scope for mutating internal actions

* ACP: cover operator admin mutating actions

* ACP: gate internal status behind admin scope

* Changelog: add missing PR credits

* Changelog: add more unreleased PR numbers

* Subagents: restrict follow-up messaging scope (openclaw#46801)

* Subagents: restrict follow-up messaging scope

* Subagents: cover foreign-session follow-up sends

* Update CHANGELOG.md

* Webhooks: tighten pre-auth body handling (openclaw#46802)

* Webhooks: tighten pre-auth body handling

* Webhooks: clean up request body guards

* Tools: revalidate workspace-only patch targets (openclaw#46803)

* Tools: revalidate workspace-only patch targets

* Tests: narrow apply-patch delete-path assertion

* CLI: trim onboarding provider startup imports (openclaw#47467)

* Scope Control UI sessions per gateway (openclaw#47453)

* Scope Control UI sessions per gateway

Signed-off-by: sallyom <[email protected]>

* Add changelog for Control UI session scoping

Signed-off-by: sallyom <[email protected]>

---------

Signed-off-by: sallyom <[email protected]>

* Gateway: scrub credentials from endpoint snapshots (openclaw#46799)

* Gateway: scrub credentials from endpoint snapshots

* Gateway: scrub raw endpoint credentials in snapshots

* Gateway: preserve config redaction round-trips

* Gateway: restore redacted endpoint URLs on apply

* fix(config): avoid failing startup on implicit memory slot (openclaw#47494)

* fix(config): avoid failing on implicit memory slot

* fix(config): satisfy build for memory slot guard

* docs(changelog): note implicit memory slot startup fix (openclaw#47494)

* CLI: lazy-load auth choice provider fallback (openclaw#47495)

* CLI: lazy-load auth choice provider fallback

* CLI: cover lazy auth choice provider fallback

* fix(ci): config drift found and documented

* Gateway: tighten forwarded client and pairing guards (openclaw#46800)

* Gateway: tighten forwarded client and pairing guards

* Gateway: make device approval scope checks atomic

* Gateway: preserve device approval baseDir compatibility

* Changelog: note CLI OOM startup fixes (openclaw#47525)

* Commands: lazy-load model picker provider runtime (openclaw#47536)

* Commands: lazy-load model picker provider runtime

* Tests: cover model picker runtime boundary

* docs: fork rebase spec + per-patch diffs for upstream v2026.3.13 merge

Generated after failed merge attempt (2026-03-15). Contains:
- FORK-PATCHES-SPEC.md: implementation instructions per patch group (249 lines)
- FORK-REBASE-SPEC.md: technical context, errors, SSE protocol (292 lines)
- fork-patches/by-patch/: 31 per-patch git diffs (consultable on demand)
- fork-patches/fork-vs-upstream-src-only.patch: full squashed diff (5813 lines)

Co-authored-by: Bob

* docs: add merge plan from feat/upstream-merge-3.13 branch

Co-authored-by: Bob

* docs: remove old merge plan — superseded by FORK-PATCHES-SPEC + FORK-REBASE-SPEC

Co-authored-by: Bob

* chore(fmt): format changes and broken types

* Commands: split static onboard auth choice help (openclaw#47545)

* Commands: split static onboard auth choice help

* Tests: cover static onboard auth choice help

* Changelog: note static onboard auth choice help

* CLI/completion: fix generator OOM and harden plugin registries (openclaw#45537)

* fix: avoid OOM during completion script generation

* CLI/completion: fix PowerShell nested command paths

* CLI/completion: cover generated shell scripts

* Changelog: note completion generator follow-up

* Plugins: reserve shared registry names

---------

Co-authored-by: Xiaoyi <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>

* fix(plugins): load bundled extensions from dist (openclaw#47560)

* fix(models): preserve stream usage compat opt-ins (openclaw#45733)

Preserves explicit `supportsUsageInStreaming` overrides from built-in provider
catalogs and user config instead of unconditionally forcing `false` on non-native
openai-completions endpoints.

Adds `applyNativeStreamingUsageCompat()` to set `supportsUsageInStreaming: true`
on ModelStudio (DashScope) and Moonshot models at config build time so their
native streaming usage works out of the box.

Closes openclaw#46142

Co-authored-by: pezy <[email protected]>
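The compat rule above can be sketched as a small normalizer. The field names (`supportsDeveloperRole`, `supportsUsageInStreaming`) follow the PR text; the function shape and surrounding interface are assumptions for illustration, not the repo's actual implementation.

```typescript
interface ModelCompat {
  supportsDeveloperRole?: boolean;
  supportsUsageInStreaming?: boolean;
}

function normalizeModelCompat(compat: ModelCompat, isNativeOpenAI: boolean): ModelCompat {
  if (isNativeOpenAI) return compat;
  return {
    ...compat,
    // Still forced off for all non-native openai-completions endpoints.
    supportsDeveloperRole: false,
    // Conservative default is injected only when the flag is unspecified,
    // so explicit catalog/user-config opt-ins (e.g. DashScope, Moonshot)
    // survive and stream_options.include_usage is sent.
    supportsUsageInStreaming: compat.supportsUsageInStreaming ?? false,
  };
}
```

Using `??` rather than an unconditional `false` is the whole fix: explicit `true` and explicit `false` both pass through, and only the undefined case gets the safe default for unknown proxies.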

* Plugins: reserve context engine ownership

* docs(zalo): document current Marketplace bot behavior (openclaw#47552)

Verified:
- pnpm check:docs

Co-authored-by: Tomáš Dinh <[email protected]>
Co-authored-by: Tak Hoffman <[email protected]>

* Docs: move release runbook to maintainer repo (openclaw#47532)

* Docs: redact private release setup

* Docs: tighten release order

* Docs: move release runbook to maintainer repo

* Docs: delete public mac release page

* Docs: remove zh-CN mac release page

* Docs: turn release checklist into release policy

* Docs: point release policy to private docs

* Docs: regenerate zh-CN release policy pages

* Docs: preserve Doctor in zh-CN hubs

* Docs: fix zh-CN polls label

* Docs: tighten docs i18n term guardrails

* Docs: enforce zh-CN glossary coverage

* Scripts: rebuild on extension and tsdown config changes (openclaw#47571)

Merged via squash.

Prepared head SHA: edd8ed8
Co-authored-by: gumadeiras <[email protected]>
Co-authored-by: gumadeiras <[email protected]>
Reviewed-by: @gumadeiras

* fix: reset chat buffer on tool-start to prevent intermediary text accumulation

The Pi SDK resets lastStreamedAssistantCleaned between tool calls, but
the gateway chatRunState.buffers was not reset — causing mergedText to
accumulate text from ALL prior turns. The SSE subscriber (which resets
lastTextLen=0 on tool-start) then re-emitted the entire conversation.

Co-authored-by: Bob
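The buffer-reset fix above can be sketched like this. The names (`chatRunState`, buffers, tool-start) mirror the commit text, but the event shape and function are illustrative assumptions.

```typescript
interface ChatRunState {
  buffers: string[];
}

// On each tool-start the per-run text buffer is cleared, mirroring the
// SDK's reset of lastStreamedAssistantCleaned, so the merged text only
// ever contains the current turn rather than all prior turns.
function onStreamEvent(
  state: ChatRunState,
  event: { type: string; text?: string },
): string {
  if (event.type === "tool-start") {
    state.buffers.length = 0;
  } else if (event.type === "text" && event.text) {
    state.buffers.push(event.text);
  }
  return state.buffers.join("");
}
```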

* fix(release): block oversized npm packs that regress low-memory startup (openclaw#46850)

* fix(release): guard npm pack size regressions

* fix(release): fail closed when npm omits pack size

* Plugins: reserve context engine ownership (openclaw#47595)

* Plugins: reserve context engine ownership

* Update src/context-engine/registry.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: restore Patch #4 (chat mirror) and Patch #5 (inbound push) lost in Codex merge

Patch #4: webchat-originated replies on WA-scoped sessions now mirror
to WhatsApp via sendMessageWhatsApp(). The Codex merge kept the
runContext.mirror registration but lost the delivery block.

Patch #5: inbound messages (WA/Slack/etc.) now broadcast to WS/SSE
clients via message.inbound event, restoring real-time cross-channel
message display in the webchat dashboard.

Co-authored-by: Bob

* fix: mirror delivery in emitChatFinal (embedded runner path)

The previous commit placed mirror only in the !agentRunStarted branch
of server-methods/chat.ts, but the embedded runner sets agentRunStarted=true
and delivers via emitChatFinal in server-chat.ts instead. This restores
the mirror block in the correct location — matching the alpha.

Co-authored-by: Bob

* Gateway: sync runtime post-build artifacts

* Plugins: harden context engine ownership

* fix: complete Patch #5 inbound push + fix mirror static import

- Add emitInboundMessageEvent() call in dispatch-from-config.ts (was
  only defined but never called — WA messages never reached SSE/webchat)
- Switch mirror from dynamic import() to static import (dynamic import
  failed silently in bundled build)

Co-authored-by: Bob

* fix: globalThis singleton for WA listeners to survive chunk duplication

The bundler splits active-listener.ts into a different chunk than
server-chat.ts (mirror) and auto-reply/monitor.ts (listener registration).
Static/dynamic imports resolve to different module instances, so mirror
always sees an empty listeners Map. Using globalThis ensures all chunks
share the same Map instance.

Co-authored-by: Bob
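The globalThis-singleton pattern described above can be sketched as follows (identifiers here are illustrative, not the repo's actual names). Because the bundler duplicates module scope per chunk, a module-level `Map` is not shared, but `globalThis` is.

```typescript
type Listener = (msg: unknown) => void;

const KEY = "__wa_active_listeners__";

// Every chunk that calls this resolves to the same Map instance, even if
// the module itself was duplicated across bundler chunks.
function getActiveListeners(): Map<string, Listener> {
  const g = globalThis as unknown as Record<string, unknown>;
  if (!(g[KEY] instanceof Map)) {
    g[KEY] = new Map<string, Listener>();
  }
  return g[KEY] as Map<string, Listener>;
}
```

One registration chunk can now `getActiveListeners().set(id, fn)` and a mirror chunk will observe the entry, which is exactly what failed with per-chunk module state.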

* fix(plugins): fix bundled plugin roots and skill assets (openclaw#47601)

* fix(acpx): resolve bundled plugin root correctly

* fix(plugins): copy bundled plugin skill assets

* fix(plugins): tolerate missing bundled skill paths

* chore: remove temporary mirror debug logs

Co-authored-by: Bob

* fix: globalThis singleton for inbound event listeners (chunk duplication)

Same root cause as the WA listener fix: dispatch-from-config.ts emits
inbound events in one chunk, server.impl.ts subscribes in another.
Module-level Set gets duplicated across chunks.

Co-authored-by: Bob

* fix: emit inbound events from WA process-message path (not dispatch-from-config)

dispatch-from-config.ts is NOT in the WA message processing chain.
WA messages go through process-message.ts → provider-dispatcher.ts.
Moved emitInboundMessageEvent to process-message.ts where WA messages
are actually processed.

Co-authored-by: Bob

* fix(ci): restore config baseline release-check output (openclaw#47629)

* Docs: regenerate config baseline

* Chore: ignore generated config baseline

* Update .prettierignore

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* CLI: support package-manager installs from GitHub main (openclaw#47630)

* CLI: resolve package-manager main install specs

* CLI: skip registry resolution for raw package specs

* CLI: support main package target updates

* CLI: document package update specs in help

* Tests: cover package install spec resolution

* Tests: cover npm main-package updates

* Tests: cover update --tag main

* Installer: support main package targets

* Installer: support main package targets on Windows

* Docs: document package-manager main updates

* Docs: document installer main targets

* Docs: document npm and pnpm main installs

* Docs: document update --tag main

* Changelog: note package-manager main installs

* Update src/infra/update-global.test.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: emit message.inbound directly on gatewayEventBus

Bypass inbound-events.ts entirely — its module-level Set suffers from
chunk duplication even with globalThis (timing/ordering issues).
gatewayEventBus already uses globalThis singleton and is proven to work
for chat/agent events. SSE listens on gatewayEventBus for message.inbound.

Co-authored-by: Bob

* fix(dev): align gateway watch with tsdown wrapper (openclaw#47636)

* Commands: lazy-load non-interactive plugin provider runtime (openclaw#47593)

* Commands: lazy-load non-interactive plugin provider runtime

* Tests: cover non-interactive plugin provider ordering

* Update src/commands/onboard-non-interactive/local/auth-choice.plugin-providers.runtime.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Plugins: relocate bundled skill assets

* Plugins: skip nested node_modules in bundled skills

* Plugins: clean stale bundled skill outputs

* feat(plugins): move provider runtimes into bundled plugins

* build(plugins): add bundled provider plugin manifests

* Channels: move onboarding adapters into extensions

* Channels: use owned helper imports

* Plugins: broaden plugin surface for Codex App Server (openclaw#45318)

* Plugins: add inbound claim and Telegram interaction seams

* Plugins: add Discord interaction surface

* Chore: fix formatting after plugin rebase

* fix(hooks): preserve observers after inbound claim

* test(hooks): cover claimed inbound observer delivery

* fix(plugins): harden typing lease refreshes

* fix(discord): pass real auth to plugin interactions

* fix(plugins): remove raw session binding runtime exposure

* fix(plugins): tighten interactive callback handling

* Plugins: gate conversation binding with approvals

* Plugins: migrate legacy plugin binding records

* Plugins/phone-control: update test command context

* Plugins: migrate legacy binding ids

* Plugins: migrate legacy codex session bindings

* Discord: fix plugin interaction handling

* Discord: support direct plugin conversation binds

* Plugins: preserve Discord command bind targets

* Tests: fix plugin binding and interactive fallout

* Discord: stabilize directory lookup tests

* Discord: route bound DMs to plugins

* Discord: restore plugin bindings after restart

* Telegram: persist detached plugin bindings

* Plugins: limit binding APIs to Telegram and Discord

* Plugins: harden bound conversation routing

* Plugins: fix extension target imports

* Plugins: fix Telegram runtime extension imports

* Plugins: format rebased binding handlers

* Discord: bind group DM interactions by channel

---------

Co-authored-by: Vincent Koc <[email protected]>

* feat(plugins): add compatible bundle support

* feat(plugins): move provider runtimes into bundled plugins

* build(plugins): add bundled provider plugin packages

* fix(plugins): restore provider compatibility fallbacks

* Changelog: note plugin agent integrations

* chore: remove inbound-push debug logs

Co-authored-by: Bob

* refactor: decouple channel setup discovery

* refactor: move telegram onboarding to setup wizard

* docs: describe channel setup wizard surface

* fix: tighten setup wizard typing

* fix: deduplicate inbound events + use raw body instead of envelope

- Remove emitInboundMessageEvent from dispatch-from-config.ts (WA uses
  process-message.ts path, causing double emit)
- Use params.msg.body (clean) instead of combinedBody (with envelope
  prefix) to avoid showing [WhatsApp ...] metadata in chat UI

Co-authored-by: Bob

* Commands: lazy-load auth choice plugin provider runtime (openclaw#47692)

* Commands: lazy-load auth choice plugin provider runtime

* Tests: cover auth choice plugin provider runtime

* refactor: expand setup wizard flow

* refactor: move discord and slack to setup wizard

* refactor: drop onboarding adapter sdk exports

* docs: update setup wizard capabilities

* feat(plugins): test bundle MCP end to end

* fix(onboarding): use scoped plugin snapshots to prevent OOM on low-memory hosts (openclaw#46763)

* fix(onboarding): use scoped plugin snapshots to prevent OOM on low-memory hosts

Onboarding and channel-add flows previously loaded the full plugin registry,
which caused OOM crashes on memory-constrained hosts. This patch introduces
scoped, non-activating plugin registry snapshots that load only the selected
channel plugin without replacing the running gateway's global state.

Key changes:
- Add onlyPluginIds and activate options to loadOpenClawPlugins for scoped loads
- Add suppressGlobalCommands to plugin registry to avoid leaking commands
- Replace full registry reloads in onboarding with per-channel scoped snapshots
- Validate command definitions in snapshot loads without writing global registry
- Preload configured external plugins via scoped discovery during onboarding

Co-Authored-By: Claude Opus 4.6 <[email protected]>
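The scoped, non-activating load described above can be sketched like this. The option names (`onlyPluginIds`, `activate`) come from the commit message; the registry shape, plugin list, and loader body are hypothetical stand-ins.

```typescript
interface LoadOptions {
  onlyPluginIds?: string[];
  activate?: boolean;
}

interface PluginRegistry {
  plugins: string[];
  activated: boolean;
}

// Stand-in for the full built-in catalog.
const AVAILABLE = ["telegram", "discord", "whatsapp", "slack"];

function loadOpenClawPlugins(opts: LoadOptions = {}): PluginRegistry {
  const ids = opts.onlyPluginIds
    ? AVAILABLE.filter((id) => opts.onlyPluginIds!.includes(id))
    : AVAILABLE; // full-registry path: the one that OOMed on small hosts
  return { plugins: ids, activated: opts.activate ?? true };
}

// Onboarding takes a per-channel snapshot: one plugin, no global activation,
// so the running gateway's registry state is never replaced.
const snapshot = loadOpenClawPlugins({ onlyPluginIds: ["telegram"], activate: false });
```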

* fix(test): add return type annotation to hoisted mock to resolve TS2322

* fix(plugins): enforce cache:false invariant for non-activating snapshot loads

* Channels: preserve lazy scoped snapshot import after rebase

* Onboarding: scope channel snapshots by plugin id

* Catalog: trust manifest ids for channel plugin mapping

* Onboarding: preserve scoped setup channel loading

* Onboarding: restore built-in adapter fallback

---------

Co-authored-by: Claude Opus 4.6 <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>

* feat(plugins): add provider usage runtime hooks

* feat(plugins): move bundled providers behind plugin hooks

* docs(plugins): document provider runtime usage hooks

* docs(plugins): unify bundle format explainer

* fix: repair onboarding adapter registry imports

* refactor: expand setup wizard input flow

* refactor: move signal imessage mattermost to setup wizard

* docs: document richer setup wizard prompts

* feat(plugins): move anthropic and openai vendors to plugins

* fix: repair onboarding setup-wizard imports

* test(discord): cover startup phase logging

* fix: reduce plugin and discord warning noise

* chore: raise plugin registry cache cap

* build: suppress protobufjs eval warning in tsdown

* refactor: tighten setup wizard onboarding bridge

* refactor: move bluebubbles to setup wizard

* refactor: move nextcloud talk to setup wizard

* CLI: restore lightweight root help and scoped status plugin preload

* Matrix: lazy-load runtime-heavy channel paths

* CI: add CLI startup memory regression check

* MSTeams: lazy-load runtime-heavy channel paths

* refactor: expand setup wizard flow

* refactor: move whatsapp to setup wizard

* refactor: move irc to setup wizard

* refactor: move tlon to setup wizard

* refactor: move googlechat to setup wizard

* refactor: expose setup wizard sdk surfaces

* Feishu: lazy-load runtime-heavy channel paths

* Google Chat: lazy-load runtime-heavy channel paths

* fix: gate setup-only plugin side effects

* feat(web-search): add plugin-backed search providers

* fix(web-search): restore build after plugin rebase

* refactor(web-search): move providers into company plugins

* WhatsApp: lazy-load setup wizard surface

* fix: align channel adapters with plugin sdk

* fix: repair node24 ci type drift

* refactor(google): merge gemini auth into google plugin

* feat(plugins): merge openai vendor seams into one plugin

* refactor(plugins): lazy load provider runtime shims

* perf(cli): trim help startup imports

* perf(status): defer heavy startup loading

* fix(matrix): assert outbound runtime hooks

* refactor: extend setup wizard account resolution

* refactor: move feishu zalo zalouser to setup wizard

* refactor: move matrix msteams twitch to setup wizard

* refactor: drop channel onboarding fallback

* fix: quiet discord startup logs

* Slack: lazy-load setup wizard surface

* Feishu: drop stale runtime onboarding export

* Discord: lazy-load setup wizard surface

* Signal: lazy-load setup wizard surface

* perf(plugins): lazy-load setup surfaces

* fix(cli): repair preaction merge typo

* Signal: restore setup surface helper exports

* iMessage: lazy-load setup wizard surface

* Nextcloud Talk: split setup adapter helpers

* fix: remove stale dist plugin dirs

* BlueBubbles: split setup adapter helpers

* test(plugins): cover retired google auth compatibility

* refactor(tests): share plugin registration helpers

* refactor(plugins): share bundled compat transforms

* refactor(google): split oauth flow modules

* refactor(plugin-sdk): centralize entrypoint manifest

* fix(docs): harden i18n prompt failures

* docs(i18n): sync zh-CN google plugin references

* fix(docs): run i18n through a local rpc client

* build(plugin-sdk): enforce export sync in check

* docs(google): remove stale plugin references

* IRC: split setup adapter helpers

* refactor: move line to setup wizard

* refactor: trim onboarding sdk exports

* Telegram: split setup adapter helpers

* fix: allow plugin package id hints

* Tlon: split setup adapter helpers

* LINE: split setup adapter helpers

* fix: restore ci type checks

* fix: resolve line setup rebase drift

* Mattermost: split setup adapter helpers

* refactor: merge minimax bundled plugins

* docs: refresh zh-CN model providers

* perf(plugins): lazy-load channel setup entrypoints

* Google Chat: split setup adapter helpers

* Matrix: split setup adapter helpers

* MSTeams: split setup adapter helpers

* feat(telegram): add topic-edit action

* fix(telegram): normalize topic-edit targets

* fix: add Telegram topic-edit action (openclaw#47798)

* Feishu: split setup adapter helpers

* fixed main?

* Zalo: split setup adapter helpers

* refactor(plugins): split lightweight channel setup modules

* Zalouser: split setup adapter helpers

* Status: skip unused channel issue scan in JSON mode

* fix(plugins): tighten lazy setup typing

* fix: tighten outbound channel/plugin resolution

* fix(ci): repair security and route test fixtures

* secrets: harden read-only SecretRef command paths and diagnostics (openclaw#47794)

* secrets: harden read-only SecretRef resolution for status and audit

* CLI: add SecretRef degrade-safe regression coverage

* Docs: align SecretRef status and daemon probe semantics

* Security audit: close SecretRef review gaps

* Security audit: preserve source auth SecretRef configuredness

* changelog

Signed-off-by: joshavant <[email protected]>

---------

Signed-off-by: joshavant <[email protected]>

* Gateway: add presence-only probe mode for status

* refactor: move group access into setup wizard

* feat: add nostr setup and unify channel setup discovery

* fix: drop duplicate channel setup import

* feat: add openshell sandbox backend

* feat(system-prompt): replace hardcoded identity with butley-system-prompt.md

Cherry-picked from work (9e1137a). Guilherme's PR #4.
Co-authored-by: Bob

* fix: suppress SSE finalization on retryable rate-limit errors (openclaw#32)

Cherry-picked from work (456f091). Retryable provider errors (429,
overload) no longer kill the SSE stream — keeps it open during
gateway retries/failover so text flows when retry succeeds.

Co-authored-by: Bob
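The retryable-error gate above can be sketched as a simple classifier. The status list and helper names are assumptions for illustration; the real repo may classify on provider error types rather than bare HTTP codes.

```typescript
// 429 (rate limit) and overload-style statuses are retried by the gateway,
// so the SSE stream must be kept open rather than finalized.
function isRetryableProviderError(status: number): boolean {
  return status === 429 || status === 503 || status === 529;
}

function shouldFinalizeStream(status: number): boolean {
  return !isRetryableProviderError(status);
}
```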

* Status: scope JSON plugin preload to configured channels

* feat: persist previousSessionId chain across session resets (openclaw#34)

Cherry-picked from work (b465233). Adapted: upstream already has
previousSessionEntry — only added previousSessionIdForChain for
fallback chain persistence on reset/new/idle-expiry.

Co-authored-by: Bob

* Status: lazy-load read-only account inspectors

* refactor(core): land plugin auth and startup cleanup

* chore: restore butley-api + clickup-api custom plugins from alpha

These custom extensions were missing from the rebase branch.
Copied from alpha verbatim.

Co-authored-by: Bob

* CLI: route gateway status before program registration

* feat: add remote openshell sandbox mode

* docs: expand openshell sandbox docs

* feat: add firecrawl onboarding search plugin

* Gateway: lazy-load SSH status helpers

* refactor: rename channel setup flow seam

* refactor: move setup fallback into setup registry

* feat: add synology chat setup wizard

* build: add setup entrypoints for migrated channel plugins

* docs: update channel setup docs

* fix: update feishu setup adapter import

* Status: lazy-load channel summary helpers

* Agents: skip eager context warmup for status commands

* Status: route JSON through lean command

* refactor(plugins): move auth and model policy to providers

* fix: control UI sends correct provider prefix when switching models

The model selector was using just the model ID (e.g. "gpt-5.2") as the
option value. When sent to sessions.patch, the server would fall back to
the session's current provider ("anthropic") yielding "anthropic/gpt-5.2"
instead of "openai/gpt-5.2".

Now option values use "provider/model" format, and resolveModelOverrideValue
and resolveDefaultModelValue also return the full provider-prefixed key so
selected state stays consistent.
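The provider-prefix fix above can be sketched like this. The `"provider/model"` format is from the commit text; the two helpers are illustrative, not the actual `resolveModelOverrideValue`/`resolveDefaultModelValue` implementations.

```typescript
// Option values always carry the provider, so the server never has to
// guess one from the session.
function toModelOptionValue(provider: string, modelId: string): string {
  return `${provider}/${modelId}`;
}

function parseModelOptionValue(
  value: string,
  sessionProvider: string,
): { provider: string; model: string } {
  const slash = value.indexOf("/");
  if (slash === -1) {
    // Bare model id: the old behavior, which fell back to the session's
    // current provider and produced e.g. "anthropic/gpt-5.2".
    return { provider: sessionProvider, model: value };
  }
  return { provider: value.slice(0, slash), model: value.slice(slash + 1) };
}
```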

* fix: format default model label as 'model · provider' for consistency

The default option showed 'Default (openai/gpt-5.2)' while individual
options used the friendlier 'gpt-5.2 · openai' format.

* Nostr: break setup-surface import cycle

* Tests: stabilize bundle MCP env on Windows

* Status: lazy-load channel security and summaries

* Docs: refresh generated config baseline

* test: silence vitest warning noise

* Status: lazy-load text scan helpers

* refactor: rename setup helper surfaces

* test: fix fetch mock typing

* docs: update channel setup wording

* Security: lazy-load channel audit provider helpers

* fix(ui): centralize control model ref handling

* CLI: route gateway status through daemon status

* Status: restore lazy scan runtime typing

* feat: token usage tracking via llm_output hook

* fix: remove duplicate previousSessionEntry declaration

* fix: butley-api extension imports — use openclaw/plugin-sdk instead of relative source paths

* fix: ensure llm_output hook is included in butley-api build output

- Add optional bundled cluster filtering to listBundledPluginBuildEntries()
  to skip extensions with native dependencies (matrix, whatsapp, etc.)
  that cannot be bundled by rolldown on all platforms
- Filter plugin-sdk entries for optional clusters to prevent native
  .node binary bundling failures
- Matches upstream's shouldBuildBundledCluster() pattern
- butley-api dist output now correctly contains both the tool
  registration AND the llm_output hook in the register() default export

* Revert "fix: ensure llm_output hook is included in butley-api build output"

This reverts commit 4558c9d.

* fix: revert tsdown changes, copy butley-api to dist via Dockerfile

* feat: [FORK-PATCH-37] token usage tracking via direct Convex POST

* fix: improve FORK-PATCH-37 logging and add cache token tracking

---------

Signed-off-by: sallyom <[email protected]>
Signed-off-by: joshavant <[email protected]>
Co-authored-by: Jason <[email protected]>
Co-authored-by: Jason Separovic <[email protected]>
Co-authored-by: Claude Opus 4.6 (1M context) <[email protected]>
Co-authored-by: Ayaan Zaidi <[email protected]>
Co-authored-by: Praveen K  Singh <[email protected]>
Co-authored-by: webdevpraveen <[email protected]>
Co-authored-by: SkunkWorks0x <[email protected]>
Co-authored-by: imanisynapse <[email protected]>
Co-authored-by: Onur Solmaz <[email protected]>
Co-authored-by: Sahan <[email protected]>
Co-authored-by: frankekn <[email protected]>
Co-authored-by: Frank Yang <[email protected]>
Co-authored-by: thepagent <[email protected]>
Co-authored-by: Ace Lee <[email protected]>
Co-authored-by: lixuankai <[email protected]>
Co-authored-by: Ted Li <[email protected]>
Co-authored-by: MonkeyLeeT <[email protected]>
Co-authored-by: scoootscooob <[email protected]>
Co-authored-by: 助爪 <[email protected]>
Co-authored-by: xaeon2026 <[email protected]>
Co-authored-by: Andrew Demczuk <[email protected]>
Co-authored-by: Peter Steinberger <[email protected]>
Co-authored-by: Harold Hunt <[email protected]>
Co-authored-by: Tak Hoffman <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>
Co-authored-by: Aditya Chaudhary <[email protected]>
Co-authored-by: Sally O'Malley <[email protected]>
Co-authored-by: Nimrod Gutman <[email protected]>
Co-authored-by: Lucas Machado <[email protected]>
Co-authored-by: xiaoyi <[email protected]>
Co-authored-by: Xiaoyi <[email protected]>
Co-authored-by: peizhe.chen <[email protected]>
Co-authored-by: Tomáš Dinh <[email protected]>
Co-authored-by: Gustavo Madeira Santana <[email protected]>
Co-authored-by: gumadeiras <[email protected]>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Mason <[email protected]>
Co-authored-by: Josh Avant <[email protected]>
Co-authored-by: Christopher Chamaletsos <[email protected]>
Interstellar-code pushed a commit to Interstellar-code/operator1 that referenced this pull request Mar 24, 2026
(cherry picked from commit 42837a0)
sbezludny pushed a commit to sbezludny/openclaw that referenced this pull request Mar 27, 2026