Pr 39540 rebased--Azure models support(GPT-5.4 and more)#48267

Open
sawyer-shi wants to merge 27 commits into openclaw:main from sawyer-shi:pr-39540-rebased

Conversation

@sawyer-shi

@sawyer-shi sawyer-shi commented Mar 16, 2026

Summary

  • Problem: The PR branch repeatedly diverged from openclaw/openclaw:main, leaving unresolved merge conflicts and Checks awaiting conflict resolution; conflict resolution also regressed non-interactive onboarding auth-choice routing.
  • Why it matters: Unresolved conflicts block CI and merge, and routing regressions can break provider auth setup/default model behavior in non-interactive onboarding.
  • What changed: Rebased onto latest openclaw/openclaw:main, resolved onboarding/auth-choice conflicts, restored non-interactive auth routing order, and preserved Azure OpenAI non-interactive inference wiring (--azure-openai-api-key).
  • What did NOT change (scope boundary): No new provider features, no auth architecture redesign, no unrelated refactors outside onboarding/auth-choice conflict surfaces.

Change Type

  • Bug fix
  • Refactor
  • Feature
  • Docs
  • Security hardening
  • Chore/infra

Scope

  • Auth / tokens
  • API / contracts
  • UI / DX
  • Gateway / orchestration
  • Skills / tool execution
  • Memory / storage
  • Integrations
  • CI/CD / infra

Linked Issue/PR

User-visible / Behavior Changes

  • Non-interactive onboarding auth-choice inference/routing is restored after rebase conflict resolution.
  • Azure non-interactive auth-choice inference via --azure-openai-api-key remains wired correctly.
  • No intentional new user-facing features; this PR restores expected behavior and unblocks mergeability.

Security Impact

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? No
  • New/changed network calls? No
  • Command/tool execution surface changed? No
  • Data access scope changed? No
  • If any Yes, explain risk + mitigation: N/A

Repro + Verification

Environment

  • OS: Windows (local dev)
  • Runtime/container: Node 22 + pnpm
  • Model/provider: onboarding auth-choice paths (Azure OpenAI + provider auth suites)
  • Integration/channel (if any): None
  • Relevant config (redacted): default local test setup, no real secrets committed

Steps

  1. Rebase PR branch onto latest openclaw/openclaw:main.
  2. Resolve onboarding/auth-choice conflicts.
  3. Run targeted onboarding/auth regression tests.

Expected

  • Rebase completes with no conflict markers.
  • Targeted onboarding/auth suites pass.
  • Branch is no longer blocked by conflict-only CI state.

Actual

  • Rebase and conflict resolution completed successfully.
  • Targeted suites passed locally.
  • Branch updates pushed to fork branch for PR refresh.

Evidence

  • pnpm test -- src/commands/onboard-non-interactive.provider-auth.test.ts

  • pnpm test -- src/commands/azure-openai-config.test.ts src/commands/auth-choice.apply.azure-openai.test.ts

  • Local result: 55/55 tests passed

  • Failing test/log before + passing after

  • Trace/log snippets

  • Screenshot/recording

  • Perf numbers (if relevant)

Human Verification

  • Verified scenarios: non-interactive provider auth inference/routing; Azure auth-choice apply/config paths; conflict-marker cleanup in touched onboarding/auth-choice files.
  • Edge cases checked: Azure auth flag inference path; provider auth-choice fallback routing under non-interactive flow.
  • What you did not verify: full cross-platform CI matrix and unrelated integrations/channels.

Review Conversations

  • I replied to or resolved every bot review conversation I addressed in this PR.
  • I left unresolved only the conversations that still need reviewer or maintainer judgment.

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? No
  • Migration needed? No
  • If yes, exact upgrade steps: N/A

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly: Revert this PR’s conflict-resolution/routing commits.
  • Files/config to restore: onboarding/auth-choice touched files only.
  • Known bad symptoms reviewers should watch for: non-interactive provider auth not inferred/applied; expected default model/profile not set for provider auth choices.

Risks and Mitigations

  • Risk: subtle routing behavior drift in non-interactive auth-choice handling after conflict resolution.
    • Mitigation: targeted onboarding/auth regression suites executed and passing.
  • Risk: upstream main moving quickly could reintroduce conflicts before merge.
    • Mitigation: keep branch rebased on latest main before final merge.

@sawyer-shi sawyer-shi requested a review from a team as a code owner March 16, 2026 14:20
@openclaw-barnacle openclaw-barnacle bot added the docs (Improvements or additions to documentation), cli (CLI command changes), commands (Command implementations), agents (Agent runtime and tooling), and size: XL labels Mar 16, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 2d72eebbb7

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +109 to +110
.option("--azure-openai-base-url <url>", "Azure OpenAI base URL")
.option("--azure-openai-model-id <id>", "Azure OpenAI deployment/model ID")

P1 Badge Wire Azure onboarding flags into setupWizardCommand

These Azure flags are declared on the CLI, but this command never forwards azureOpenaiBaseUrl, azureOpenaiModelId, or azureOpenaiApiVersion into the setupWizardCommand options object later in the same function. That means downstream onboarding logic always sees them as undefined; in non-interactive mode, authChoice === "azure-openai-api-key" then hard-fails with the “requires Azure base URL and model/deployment ID” error even when the user supplied both flags.

Useful? React with 👍 / 👎.

@greptile-apps

greptile-apps bot commented Mar 16, 2026

Greptile Summary

This PR adds Azure OpenAI as a first-class auth provider (azure-openai-responses) to OpenClaw, introducing CLI flags (--azure-openai-api-key, --azure-openai-base-url, --azure-openai-model-id, --azure-openai-api-version), interactive and non-interactive onboarding flows, host-pattern validation, and per-model azureApiVersion param forwarding to the streaming layer. The implementation is well-tested with unit and integration tests across the new paths.

Key changes:

  • New azure-openai-config.ts with URL normalization (always resolves to <origin>/openai/v1) and host allowlist validation (*.openai.azure.com, *.services.ai.azure.com, *.cognitiveservices.azure.com)
  • New auth-choice.apply.azure-openai.ts handler wired into the existing applyAuthChoice handler chain
  • isAzureOpenAIHost helper in openai-stream-wrappers.ts expanded to cover the two new Azure host suffixes
  • azure-openai-responses added to provider-env-vars.ts for AZURE_OPENAI_API_KEY auto-detection
  • Breaking defect: src/commands/auth-choice.apply.ts has duplicate import statements for both normalizeApiKeyTokenProviderAuthChoice and applyAuthChoiceAzureOpenAI (lines 6–9), which will cause TypeScript compilation errors and prevent the build from succeeding
  • setAzureOpenaiApiKey in onboard-auth.credentials.ts is missing the safeKey guard present in setOpenrouterApiKey that prevents persisting the literal string "undefined"

Confidence Score: 1/5

  • Not safe to merge — duplicate import statements in auth-choice.apply.ts will break the TypeScript build.
  • The duplicate import declarations for normalizeApiKeyTokenProviderAuthChoice and applyAuthChoiceAzureOpenAI in src/commands/auth-choice.apply.ts (lines 6–9) are a hard compiler error that will fail the build. The underlying feature logic and test coverage are solid, but this defect must be fixed before the PR can land.
  • src/commands/auth-choice.apply.ts — duplicate imports cause a build-breaking TypeScript error

Last reviewed commit: 2d72eeb

Comment on lines +6 to +9
import { normalizeApiKeyTokenProviderAuthChoice } from "./auth-choice.apply.api-providers.js";
import { applyAuthChoiceAzureOpenAI } from "./auth-choice.apply.azure-openai.js";
import { normalizeApiKeyTokenProviderAuthChoice } from "./auth-choice.apply.api-providers.js";
import { applyAuthChoiceAzureOpenAI } from "./auth-choice.apply.azure-openai.js";

Duplicate import statements

Both normalizeApiKeyTokenProviderAuthChoice (line 6 and 8) and applyAuthChoiceAzureOpenAI (line 7 and 9) are imported twice. This will cause TypeScript compilation errors for duplicate identifier declarations.

Suggested change (replace the duplicated lines 6–9 with a single copy of each import):

import { normalizeApiKeyTokenProviderAuthChoice } from "./auth-choice.apply.api-providers.js";
import { applyAuthChoiceAzureOpenAI } from "./auth-choice.apply.azure-openai.js";

Comment on lines +384 to +394
export async function setAzureOpenaiApiKey(
key: SecretInput,
agentDir?: string,
options?: ApiKeyStorageOptions,
) {
upsertAuthProfile({
profileId: "azure-openai-responses:default",
credential: buildApiKeyCredential("azure-openai-responses", key, undefined, options),
agentDir: resolveAuthAgentDir(agentDir),
});
}

Missing safeKey guard inconsistent with sibling function

setOpenrouterApiKey (directly above this function) has an explicit guard to prevent persisting the literal string "undefined" when a prompt returns undefined and the caller coerces it with String(...):

// Never persist the literal "undefined" (e.g. when prompt returns undefined and caller used String(key)).
const safeInput = typeof key === "string" && key === "undefined" ? "" : key;

setAzureOpenaiApiKey calls buildApiKeyCredential with key directly, skipping this guard. The test for "does not persist literal 'undefined'" only covers the interactive path (through ensureApiKeyFromOptionEnvOrPrompt + normalizeApiKeyInput). If a future caller bypasses that normalization layer, the literal "undefined" could be persisted. Consider applying the same pattern for consistency with the other setter functions in this file.

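The guard the reviewer is asking for is a one-line check. A minimal sketch, with SecretInput simplified for illustration (the real type lives in onboard-auth.credentials.ts):

```typescript
// Hedged sketch: mirror setOpenrouterApiKey's safeInput guard inside
// setAzureOpenaiApiKey before the key reaches buildApiKeyCredential.
type SecretInput = string | { token: string };

// Never persist the literal "undefined" (e.g. when a prompt returned
// undefined and the caller coerced it with String(key)).
function toSafeKey(key: SecretInput): SecretInput {
  return typeof key === "string" && key === "undefined" ? "" : key;
}
```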


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 06b5c5e20f


config: params.nextConfig,
env: process.env,
});
const { authChoice, opts, runtime, baseConfig } = params;


P1 Badge Normalize legacy apiKey auth choice before dispatch

applyNonInteractiveAuthChoice now dispatches directly on params.authChoice, so the apiKey compatibility path is no longer remapped via normalizeApiKeyTokenProviderAuthChoice. In non-interactive runs that still pass --auth-choice apiKey --token-provider <provider>, the provider hint is ignored and provider-specific auth setup can be skipped (or routed to the wrong default), leaving onboarding without the requested credentials/model wiring.

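The remapping Codex describes can be sketched as a normalization step before dispatch. The choice-id shape produced here is a hypothetical placeholder; the real mapping lives in normalizeApiKeyTokenProviderAuthChoice:

```typescript
// Illustrative only: route the legacy combination
// --auth-choice apiKey --token-provider <id> to a provider-specific
// choice id before switching on authChoice.
function normalizeLegacyApiKeyChoice(
  authChoice: string,
  tokenProvider?: string,
): string {
  if (authChoice === "apiKey" && tokenProvider) {
    return `${tokenProvider}-api-key`; // hypothetical id shape
  }
  return authChoice; // already provider-specific; pass through unchanged
}
```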

Comment on lines 33 to 36
const normalizedProviderAuthChoice = normalizeApiKeyTokenProviderAuthChoice({
authChoice: normalizedAuthChoice,
tokenProvider: params.opts?.tokenProvider,
config: params.config,
env: process.env,
});


P2 Badge Pass config/env when normalizing apiKey token providers

normalizeApiKeyTokenProviderAuthChoice can resolve manifest-backed providers only when it receives the active config/workspace/env context, but this call now passes only authChoice and tokenProvider. That breaks authChoice=apiKey resolution for third-party provider manifests from configured plugin paths, so the requested provider auth flow is not selected and setup falls back to raw apiKey handling.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c40eb68628


| "chutes"
| "openai-codex"
| "openai-api-key"
| "azure-openai-api-key"


P2 Badge Register Azure auth choice in onboarding picker catalog

Adding "azure-openai-api-key" to AuthChoice makes the flow callable by explicit flag, but the interactive picker still builds from CORE_AUTH_CHOICE_OPTIONS plus plugin/manifests, and this commit does not register an Azure choice in either source. In openclaw onboard runs where --auth-choice is omitted, users cannot select Azure OpenAI at all, so the new applyAuthChoiceAzureOpenAI path is effectively unreachable unless users already know and pass the hidden choice id manually.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3d181dc742


Comment on lines +186 to +192
if (
authChoice === "zai-api-key" ||
authChoice === "zai-coding-global" ||
authChoice === "zai-coding-cn" ||
authChoice === "zai-global" ||
authChoice === "zai-cn"
) {


P2 Badge Keep plugin-backed auth choices behind plugin checks

These new early-return branches handle zai-* auth choices (and similarly minimax-* later in this function) before applyNonInteractivePluginProviderChoice runs, so non-interactive onboarding no longer reaches the plugin enable/deny guard path (enablePluginInConfig) for those providers. In configs where a provider plugin is disabled or denylisted, --auth-choice zai-*/minimax-* now silently writes auth+model config instead of failing with the expected disabled-plugin error, which regresses policy enforcement and can leave users with inconsistent setup state.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8655783fd1


Comment on lines 35 to 38
plugins: {
...cfg.plugins,
allow: Array.from(allow),
entries: {
...cfg.plugins?.entries,
[providerId]: {
...cfg.plugins?.entries?.[providerId],
enabled: true,
},
},
},


P1 Badge Re-enable disabled plugin entries during provider resolution

When a provider is explicitly disabled via plugins.entries.<id>.enabled=false, this helper now only amends plugins.allow and leaves the disabled entry in place. In non-interactive onboarding, that prevents resolvePluginProviders from seeing the selected provider auth choice, so applyNonInteractivePluginProviderChoice returns undefined and setup can silently continue without applying the requested provider auth/model config. Restoring an entries[providerId].enabled=true override in this isolated resolution config avoids that regression while still letting later guard logic enforce denylist/global plugin disables.

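The fix Codex suggests, forcing the entry back on inside the isolated resolution config, might look like this sketch (config shape simplified from the snippet quoted above):

```typescript
// Sketch: amend plugins.allow AND override entries[providerId].enabled so an
// explicitly disabled entry cannot hide the provider during resolution.
interface PluginsConfig {
  allow?: string[];
  entries?: Record<string, { enabled?: boolean }>;
}

function withProviderEnabled(
  cfg: { plugins?: PluginsConfig },
  providerId: string,
) {
  const allow = new Set([...(cfg.plugins?.allow ?? []), providerId]);
  return {
    ...cfg,
    plugins: {
      ...cfg.plugins,
      allow: Array.from(allow),
      entries: {
        ...cfg.plugins?.entries,
        [providerId]: {
          ...cfg.plugins?.entries?.[providerId],
          enabled: true, // re-enable for this resolution pass only
        },
      },
    },
  };
}
```

Later guard logic can still enforce denylist or global plugin disables; this only affects the throwaway config used for provider resolution.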

# Conflicts:
#	src/commands/auth-choice.preferred-provider.ts
#	src/commands/onboard-auth.ts
#	src/commands/onboard-non-interactive/local/auth-choice.ts

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3dcc03a4fe


| "chutes"
| "openai-codex"
| "openai-api-key"
| "azure-openai-api-key"


P1 Badge Implement non-interactive routing for azure-openai-api-key

Adding azure-openai-api-key as a built-in auth choice exposes it to non-interactive onboarding, but that flow has no Azure-specific branch in applyNonInteractiveAuthChoice and depends on plugin choice resolution for unknown values. I checked the manifest-backed choice sources (extensions/*/openclaw.plugin.json via resolveManifestProviderAuthChoices) and there is no choiceId for azure-openai-api-key, so applyNonInteractivePluginProviderChoice can return undefined and onboarding falls through with unchanged config. In openclaw onboard --non-interactive ... --auth-choice azure-openai-api-key (or when inferred from --azure-openai-api-key), this can succeed without storing Azure auth/model config.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ae3b477dfc


Comment on lines +71 to +75
input: ["text", "image"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 200000,
maxTokens: 8192,
compat: { supportsStore: false },


P1 Badge Stop disabling store for Azure Responses models

When onboarding creates an azure-openai-responses model, this hard-codes compat.supportsStore=false. In src/agents/pi-embedded-runner/openai-stream-wrappers.ts, that makes shouldForceResponsesStore bail out early and shouldStripResponsesStore remove store from every Responses payload. Our own regression coverage for direct Azure endpoints (src/agents/pi-embedded-runner-extraparams.test.ts:1666) expects store=true, so an onboarded Azure model can break multi-turn/server-compaction flows even though setup appeared to succeed.


Comment on lines +15 to +18
return (
normalized.endsWith(".openai.azure.com") ||
normalized.endsWith(".services.ai.azure.com") ||
normalized.endsWith(".cognitiveservices.azure.com")


P2 Badge Accept Azure Government endpoints in Azure URL validation

Azure OpenAI is not limited to the public-cloud suffixes in this allowlist. Microsoft documents Azure Government service endpoints as openai.azure.us, but isAzureHost only permits .openai.azure.com, .services.ai.azure.com, and .cognitiveservices.azure.com. In government tenants, openclaw onboard --auth-choice azure-openai-api-key --azure-openai-base-url https://...openai.azure.us will reject a valid endpoint, so the new Azure setup path is unusable there.

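If the allowlist is widened as suggested, the check might look like the sketch below. The .azure.us suffix comes from the review's reference to Azure Government endpoints; treat the exact suffix list as an assumption to verify against Microsoft's documentation:

```typescript
// Sketch: public-cloud suffixes from this PR plus the Azure Government
// suffix the reviewer cites (openai.azure.us). Verify before adopting.
const ALLOWED_AZURE_SUFFIXES = [
  ".openai.azure.com",
  ".services.ai.azure.com",
  ".cognitiveservices.azure.com",
  ".openai.azure.us", // Azure Government
];

function isAllowedAzureHost(host: string): boolean {
  const normalized = host.toLowerCase();
  return ALLOWED_AZURE_SUFFIXES.some((suffix) => normalized.endsWith(suffix));
}
```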

@sawyer-shi sawyer-shi changed the title from "Pr 39540 rebased" to "Pr 39540 rebased--Azure models support(GPT-5.4 and more)" Mar 18, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b14b4bec8b


Comment on lines +81 to +85
token: params.opts?.azureOpenaiApiKey,
tokenProvider: "azure-openai-responses",
secretInputMode: requestedSecretInputMode,
config: nextConfig,
expectedProviders: ["azure-openai-responses"],


P2 Badge Pass Azure CLI overrides into interactive auth setup

These reads only work when the caller forwards the Azure-specific opts, but interactive onboarding still invokes applyAuthChoice with only tokenProvider/token in src/wizard/setup.ts:467-476. As a result, openclaw onboard --auth-choice azure-openai-api-key --azure-openai-api-key ... --azure-openai-base-url ... --azure-openai-model-id ... ignores the supplied Azure values and reprompts for them, so the new Azure CLI flags are effectively usable only in --non-interactive runs.


return {
id: modelId,
name: `Azure OpenAI ${modelId}`,
reasoning: false,


P2 Badge Preserve reasoning capability for onboarded Azure GPT-5 models

Hard-coding reasoning: false here makes every model created by the new Azure flow look non-reasoning, even for GPT-5 IDs like gpt-5.4. OpenClaw uses the catalog entry’s reasoning bit to choose the default think level in src/auto-reply/thinking.shared.ts:114-119, so an onboarded azure-openai-responses/gpt-5.4 will default to think=off and won’t be treated like the reasoning-capable GPT-5 models elsewhere in the catalog.

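One way to avoid the hard-coded reasoning: false would be to infer the bit from the model id. A rough sketch; the id pattern is a guess for illustration, not the catalog's actual rule:

```typescript
// Illustrative heuristic: treat GPT-5-family ids as reasoning-capable
// instead of marking every onboarded Azure model as non-reasoning.
function isReasoningModelId(modelId: string): boolean {
  // Matches "gpt-5", "gpt-5.4", "gpt-5.4-pro", etc., but not "gpt-50".
  return /^gpt-5([.-]|$)/.test(modelId.toLowerCase());
}
```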

@openclaw-barnacle openclaw-barnacle bot added the channel: nostr (Channel integration: nostr) label Mar 18, 2026

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fa42f6f5e5


Comment on lines +73 to +74
contextWindow: 200000,
maxTokens: 8192,


P2 Badge Use OpenAI-equivalent token limits for Azure GPT-5.4 models

The new Azure model catalog entry hard-codes contextWindow: 200000 and maxTokens: 8192 for every Azure model id, but the repo already treats the same GPT-5.4 ids on OpenAI as much larger (extensions/openai/openai-provider.ts sets GPT-5.4 / GPT-5.4-pro to 1,050,000 context and 128,000 max output). Those values are consumed by compaction/pruning logic in src/agents/pi-extensions/compaction-safeguard.ts:776-807, so an onboarded azure-openai-responses/gpt-5.4 will summarize and trim history far earlier than necessary and can inherit an artificially low output cap.

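The review's point could be addressed with a per-model lookup rather than one hard-coded fallback. A sketch using the numbers the reviewer cites from extensions/openai/openai-provider.ts; treat them as the reviewer's figures, not independently verified values:

```typescript
// Illustrative lookup: known GPT-5.4 ids get the larger OpenAI-equivalent
// limits; unknown deployment ids fall back to the current generic values.
const AZURE_MODEL_LIMITS: Record<
  string,
  { contextWindow: number; maxTokens: number }
> = {
  "gpt-5.4": { contextWindow: 1_050_000, maxTokens: 128_000 },
  "gpt-5.4-pro": { contextWindow: 1_050_000, maxTokens: 128_000 },
};

function limitsFor(modelId: string) {
  return AZURE_MODEL_LIMITS[modelId] ?? { contextWindow: 200_000, maxTokens: 8192 };
}
```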

Comment on lines +8 to +10
export const AZURE_OPENAI_PROVIDER_ID = "azure-openai-responses";
export const AZURE_OPENAI_DEFAULT_MODEL_ID = "gpt-4.1";
export const AZURE_OPENAI_DEFAULT_MODEL_REF = `${AZURE_OPENAI_PROVIDER_ID}/${AZURE_OPENAI_DEFAULT_MODEL_ID}`;


P2 Badge Register Azure Responses models in runtime thinking policies

Introducing azure-openai-responses here without a matching provider-runtime policy means Azure GPT-5.4 models never inherit the GPT-5 capabilities we already expose for OpenAI. supportsXHighThinking() only consults provider hooks before rejecting /think xhigh in src/agents/agent-command.ts:1104, and isModernModelRef() in src/agents/live-model-filter.ts:8-24 has the same limitation; meanwhile the OpenAI plugin explicitly marks the same GPT-5.4 ids as xhigh/modern in extensions/openai/openai-provider.ts:178-179. Users onboarded to azure-openai-responses/gpt-5.4 or gpt-5.4-pro will therefore lose those behaviors even though they are the same model family.

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7aed699f67

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +19 to +21
function isAzureGpt54ClassModel(modelId: string): boolean {
const normalized = normalizeAzureModelIdForPolicy(modelId);
return normalized === "gpt-5.4" || normalized === "gpt-5.4-pro";

P2: Handle custom Azure deployment names for GPT-5.4

When the Azure deployment name is customized (which this flow explicitly allows by asking for a “deployment/model ID”), this helper only recognizes literal gpt-5.4 / gpt-5.4-pro. Onboarding --azure-openai-model-id prod-gpt54 therefore writes the fallback 200k/8192 generic metadata, and the new bundled xhigh/modern policy hooks key off the same raw ID, so real GPT-5.4 deployments behind custom aliases lose the larger limits and GPT-5.4-specific behaviors.
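One possible loosening, sketched here as an assumption rather than a drop-in fix: normalize the deployment name before the class check so common aliases still match. The function name and the "strip separators, look for a gpt54 marker" heuristic are hypothetical; they do not reflect the repo's `normalizeAzureModelIdForPolicy` implementation.

```typescript
// Hypothetical sketch: recognize GPT-5.4 deployments behind custom aliases.
// Lowercase the deployment name and strip separators, then look for a
// "gpt54" marker, so "prod-gpt54" and "GPT-5.4-pro" both classify correctly.
function isGpt54ClassDeployment(deploymentId: string): boolean {
  const flattened = deploymentId.toLowerCase().replace(/[^a-z0-9]/g, "");
  return flattened.includes("gpt54");
}
```

A heuristic like this is necessarily fuzzy; an explicit per-deployment model-family override in the onboarding flow would be the safer long-term shape.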

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 38c289aa57

Comment on lines +137 to +142
providers[AZURE_OPENAI_PROVIDER_ID] = {
...existingProviderRest,
baseUrl,
api: "openai-responses",
...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
models: mergedModels,

P2: Treat azure-openai-responses as an OpenAI-family provider

This introduces a new provider id, but the OpenAI-family gates were not updated for it. I checked src/agents/pi-tools.ts:62-65,357-360 and src/agents/transcript-policy.ts:48-84: an onboarded azure-openai-responses/gpt-5.4 session is still classified as default, so apply_patch is never added even when tools.exec.applyPatch.enabled=true, and transcript replay takes the non-OpenAI sanitization path. Because this provider is configured here with api: "openai-responses", Azure GPT-5.4 users lose OpenAI-specific agent behaviors that the new onboarding flow appears to promise.
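A minimal sketch of the gate change, under stated assumptions: the function name `isOpenAiFamily` and the `ProviderConfig` shape are hypothetical, standing in for the classification logic in pi-tools.ts and transcript-policy.ts. The key idea is to classify by wire API as well as provider id, so any provider configured with `api: "openai-responses"` inherits the OpenAI-family behaviors.

```typescript
// Hypothetical sketch: OpenAI-family classification that covers Azure.
// Matching on the configured wire API means future OpenAI-compatible
// providers are picked up without editing every gate.
type ProviderConfig = { api?: string };

const OPENAI_FAMILY_IDS = new Set(["openai", "azure-openai-responses"]);

function isOpenAiFamily(providerId: string, config?: ProviderConfig): boolean {
  return OPENAI_FAMILY_IDS.has(providerId) || config?.api === "openai-responses";
}
```

With a gate like this, an `azure-openai-responses/gpt-5.4` session would get apply_patch when enabled and take the OpenAI transcript-replay path instead of the default sanitization path.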

Labels

  • agents (Agent runtime and tooling)
  • channel: nostr (Channel integration: nostr)
  • cli (CLI command changes)
  • commands (Command implementations)
  • docs (Improvements or additions to documentation)
  • extensions: acpx
  • scripts (Repository scripts)
  • size: XL
