PR 39540 rebased — Azure models support (GPT-5.4 and more) #48267
sawyer-shi wants to merge 27 commits into openclaw:main
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2d72eebbb7
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
src/cli/program/register.onboard.ts
Outdated
```ts
.option("--azure-openai-base-url <url>", "Azure OpenAI base URL")
.option("--azure-openai-model-id <id>", "Azure OpenAI deployment/model ID")
```
Wire Azure onboarding flags into setupWizardCommand
These Azure flags are declared on the CLI, but this command never forwards azureOpenaiBaseUrl, azureOpenaiModelId, or azureOpenaiApiVersion into the setupWizardCommand options object later in the same function. That means downstream onboarding logic always sees them as undefined; in non-interactive mode, authChoice === "azure-openai-api-key" then hard-fails with the “requires Azure base URL and model/deployment ID” error even when the user supplied both flags.
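A minimal sketch of the missing wiring, assuming hypothetical names (`OnboardCliOpts`, `buildWizardOptions`, and the option shape are illustrative, not the repo's actual identifiers):

```typescript
// Illustrative sketch only: forward the Azure flags into the wizard options so
// downstream onboarding logic no longer sees them as undefined.
type OnboardCliOpts = {
  azureOpenaiBaseUrl?: string;
  azureOpenaiModelId?: string;
  azureOpenaiApiVersion?: string;
};

type SetupWizardOptions = OnboardCliOpts & { nonInteractive: boolean };

function buildWizardOptions(opts: OnboardCliOpts, nonInteractive: boolean): SetupWizardOptions {
  return {
    nonInteractive,
    // The forwarding the review says is missing:
    azureOpenaiBaseUrl: opts.azureOpenaiBaseUrl,
    azureOpenaiModelId: opts.azureOpenaiModelId,
    azureOpenaiApiVersion: opts.azureOpenaiApiVersion,
  };
}
```

With this in place, a non-interactive run that supplies both flags would no longer hit the "requires Azure base URL and model/deployment ID" failure.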
Greptile Summary
This PR adds Azure OpenAI as a first-class auth provider. Key changes:
Confidence Score: 1/5
Prompt To Fix All With AI
This is a comment left during a code review.
Path: src/commands/auth-choice.apply.ts
Line: 6-9
Comment:
**Duplicate import statements**
Both `normalizeApiKeyTokenProviderAuthChoice` (line 6 and 8) and `applyAuthChoiceAzureOpenAI` (line 7 and 9) are imported twice. This will cause TypeScript compilation errors for duplicate identifier declarations.
```suggestion
import { normalizeApiKeyTokenProviderAuthChoice } from "./auth-choice.apply.api-providers.js";
import { applyAuthChoiceAzureOpenAI } from "./auth-choice.apply.azure-openai.js";
```
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: src/commands/onboard-auth.credentials.ts
Line: 384-394
Comment:
**Missing `safeKey` guard inconsistent with sibling function**
`setOpenrouterApiKey` (directly above this function) has an explicit guard to prevent persisting the literal string `"undefined"` when a prompt returns `undefined` and the caller coerces it with `String(...)`:
```typescript
// Never persist the literal "undefined" (e.g. when prompt returns undefined and caller used String(key)).
const safeInput = typeof key === "string" && key === "undefined" ? "" : key;
```
`setAzureOpenaiApiKey` calls `buildApiKeyCredential` with `key` directly, skipping this guard. The test for "does not persist literal 'undefined'" only covers the interactive path (through `ensureApiKeyFromOptionEnvOrPrompt` + `normalizeApiKeyInput`). If a future caller bypasses that normalization layer, the literal `"undefined"` could be persisted. Consider applying the same pattern for consistency with the other setter functions in this file.
How can I resolve this? If you propose a fix, please make it concise.
Last reviewed commit: 2d72eeb
src/commands/auth-choice.apply.ts
Outdated
```ts
import { normalizeApiKeyTokenProviderAuthChoice } from "./auth-choice.apply.api-providers.js";
import { applyAuthChoiceAzureOpenAI } from "./auth-choice.apply.azure-openai.js";
import { normalizeApiKeyTokenProviderAuthChoice } from "./auth-choice.apply.api-providers.js";
import { applyAuthChoiceAzureOpenAI } from "./auth-choice.apply.azure-openai.js";
```
Duplicate import statements
Both normalizeApiKeyTokenProviderAuthChoice (line 6 and 8) and applyAuthChoiceAzureOpenAI (line 7 and 9) are imported twice. This will cause TypeScript compilation errors for duplicate identifier declarations.
```suggestion
import { normalizeApiKeyTokenProviderAuthChoice } from "./auth-choice.apply.api-providers.js";
import { applyAuthChoiceAzureOpenAI } from "./auth-choice.apply.azure-openai.js";
```
```ts
export async function setAzureOpenaiApiKey(
  key: SecretInput,
  agentDir?: string,
  options?: ApiKeyStorageOptions,
) {
  upsertAuthProfile({
    profileId: "azure-openai-responses:default",
    credential: buildApiKeyCredential("azure-openai-responses", key, undefined, options),
    agentDir: resolveAuthAgentDir(agentDir),
  });
}
```
Missing safeKey guard inconsistent with sibling function
setOpenrouterApiKey (directly above this function) has an explicit guard to prevent persisting the literal string "undefined" when a prompt returns undefined and the caller coerces it with String(...):

```ts
// Never persist the literal "undefined" (e.g. when prompt returns undefined and caller used String(key)).
const safeInput = typeof key === "string" && key === "undefined" ? "" : key;
```

setAzureOpenaiApiKey calls buildApiKeyCredential with key directly, skipping this guard. The test for "does not persist literal 'undefined'" only covers the interactive path (through ensureApiKeyFromOptionEnvOrPrompt + normalizeApiKeyInput). If a future caller bypasses that normalization layer, the literal "undefined" could be persisted. Consider applying the same pattern for consistency with the other setter functions in this file.
Force-pushed from 2d72eeb to 06b5c5e (Compare)
💡 Codex Review
Reviewed commit: 06b5c5e20f
```ts
  config: params.nextConfig,
  env: process.env,
});
const { authChoice, opts, runtime, baseConfig } = params;
```
Normalize legacy apiKey auth choice before dispatch
applyNonInteractiveAuthChoice now dispatches directly on params.authChoice, so the apiKey compatibility path is no longer remapped via normalizeApiKeyTokenProviderAuthChoice. In non-interactive runs that still pass --auth-choice apiKey --token-provider <provider>, the provider hint is ignored and provider-specific auth setup can be skipped (or routed to the wrong default), leaving onboarding without the requested credentials/model wiring.
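A sketch of the ordering the review asks for: remap the legacy choice before dispatching on it. The provider-to-choice mapping shown is an assumption for illustration; the real normalizeApiKeyTokenProviderAuthChoice also consults config, workspace, and env context.

```typescript
// Illustrative sketch: normalize the legacy "apiKey" compatibility choice
// before any dispatch, so --token-provider <provider> is not ignored.
function normalizeAuthChoice(authChoice: string, tokenProvider?: string): string {
  if (authChoice === "apiKey" && tokenProvider) {
    // e.g. --auth-choice apiKey --token-provider openai -> "openai-api-key"
    return `${tokenProvider}-api-key`;
  }
  return authChoice;
}
```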
```ts
const normalizedProviderAuthChoice = normalizeApiKeyTokenProviderAuthChoice({
  authChoice: normalizedAuthChoice,
  tokenProvider: params.opts?.tokenProvider,
  config: params.config,
  env: process.env,
});
```
Pass config/env when normalizing apiKey token providers
normalizeApiKeyTokenProviderAuthChoice can resolve manifest-backed providers only when it receives the active config/workspace/env context, but this call now passes only authChoice and tokenProvider. That breaks authChoice=apiKey resolution for third-party provider manifests from configured plugin paths, so the requested provider auth flow is not selected and setup falls back to raw apiKey handling.
💡 Codex Review
Reviewed commit: c40eb68628
```ts
  | "chutes"
  | "openai-codex"
  | "openai-api-key"
  | "azure-openai-api-key"
```
Register Azure auth choice in onboarding picker catalog
Adding "azure-openai-api-key" to AuthChoice makes the flow callable by explicit flag, but the interactive picker still builds from CORE_AUTH_CHOICE_OPTIONS plus plugin/manifests, and this commit does not register an Azure choice in either source. In openclaw onboard runs where --auth-choice is omitted, users cannot select Azure OpenAI at all, so the new applyAuthChoiceAzureOpenAI path is effectively unreachable unless users already know and pass the hidden choice id manually.
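A hypothetical shape for registering the Azure choice in the picker catalog; the real CORE_AUTH_CHOICE_OPTIONS entry type may differ, and the label/hint strings are invented for illustration:

```typescript
// Illustrative: an interactive-picker catalog entry for the new choice.
type AuthChoiceOption = { id: string; label: string; hint?: string };

const AUTH_CHOICE_OPTIONS: AuthChoiceOption[] = [
  { id: "openai-api-key", label: "OpenAI (API key)" },
  // Without an entry like this, azure-openai-api-key is unreachable interactively:
  {
    id: "azure-openai-api-key",
    label: "Azure OpenAI (API key)",
    hint: "requires base URL and deployment/model ID",
  },
];
```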
💡 Codex Review
Reviewed commit: 3d181dc742
```ts
if (
  authChoice === "zai-api-key" ||
  authChoice === "zai-coding-global" ||
  authChoice === "zai-coding-cn" ||
  authChoice === "zai-global" ||
  authChoice === "zai-cn"
) {
```
Keep plugin-backed auth choices behind plugin checks
These new early-return branches handle zai-* auth choices (and similarly minimax-* later in this function) before applyNonInteractivePluginProviderChoice runs, so non-interactive onboarding no longer reaches the plugin enable/deny guard path (enablePluginInConfig) for those providers. In configs where a provider plugin is disabled or denylisted, --auth-choice zai-*/minimax-* now silently writes auth+model config instead of failing with the expected disabled-plugin error, which regresses policy enforcement and can leave users with inconsistent setup state.
💡 Codex Review
Reviewed commit: 8655783fd1
```ts
plugins: {
  ...cfg.plugins,
  allow: Array.from(allow),
  entries: {
    ...cfg.plugins?.entries,
    [providerId]: {
      ...cfg.plugins?.entries?.[providerId],
      enabled: true,
    },
  },
},
```
Re-enable disabled plugin entries during provider resolution
When a provider is explicitly disabled via plugins.entries.<id>.enabled=false, this helper now only amends plugins.allow and leaves the disabled entry in place. In non-interactive onboarding, that prevents resolvePluginProviders from seeing the selected provider auth choice, so applyNonInteractivePluginProviderChoice returns undefined and setup can silently continue without applying the requested provider auth/model config. Restoring an entries[providerId].enabled=true override in this isolated resolution config avoids that regression while still letting later guard logic enforce denylist/global plugin disables.
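A sketch of the suggested fix, with simplified placeholder types: when building the isolated resolution config, force entries[providerId].enabled = true so resolution can see the provider, while later guard logic still enforces denylist/global disables.

```typescript
// Illustrative: build a resolution-only config that overrides a disabled entry.
type PluginsConfig = {
  allow?: string[];
  entries?: Record<string, { enabled?: boolean }>;
};

function withProviderEnabled(plugins: PluginsConfig, providerId: string): PluginsConfig {
  return {
    ...plugins,
    // Amend the allowlist and re-enable the entry for resolution purposes only.
    allow: Array.from(new Set([...(plugins.allow ?? []), providerId])),
    entries: {
      ...plugins.entries,
      [providerId]: { ...plugins.entries?.[providerId], enabled: true },
    },
  };
}
```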
# Conflicts:
#	src/commands/auth-choice.preferred-provider.ts
#	src/commands/onboard-auth.ts
#	src/commands/onboard-non-interactive/local/auth-choice.ts
💡 Codex Review
Reviewed commit: 3dcc03a4fe
```ts
  | "chutes"
  | "openai-codex"
  | "openai-api-key"
  | "azure-openai-api-key"
```
Implement non-interactive routing for azure-openai-api-key
Adding azure-openai-api-key as a built-in auth choice exposes it to non-interactive onboarding, but that flow has no Azure-specific branch in applyNonInteractiveAuthChoice and depends on plugin choice resolution for unknown values. I checked the manifest-backed choice sources (extensions/*/openclaw.plugin.json via resolveManifestProviderAuthChoices) and there is no choiceId for azure-openai-api-key, so applyNonInteractivePluginProviderChoice can return undefined and onboarding falls through with unchanged config. In openclaw onboard --non-interactive ... --auth-choice azure-openai-api-key (or when inferred from --azure-openai-api-key), this can succeed without storing Azure auth/model config.
💡 Codex Review
Reviewed commit: ae3b477dfc
```ts
input: ["text", "image"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 200000,
maxTokens: 8192,
compat: { supportsStore: false },
```
Stop disabling store for Azure Responses models
When onboarding creates an azure-openai-responses model, this hard-codes compat.supportsStore=false. In src/agents/pi-embedded-runner/openai-stream-wrappers.ts, that makes shouldForceResponsesStore bail out early and shouldStripResponsesStore remove store from every Responses payload. Our own regression coverage for direct Azure endpoints (src/agents/pi-embedded-runner-extraparams.test.ts:1666) expects store=true, so an onboarded Azure model can break multi-turn/server-compaction flows even though setup appeared to succeed.
```ts
return (
  normalized.endsWith(".openai.azure.com") ||
  normalized.endsWith(".services.ai.azure.com") ||
  normalized.endsWith(".cognitiveservices.azure.com")
```
Accept Azure Government endpoints in Azure URL validation
Azure OpenAI is not limited to the public-cloud suffixes in this allowlist. Microsoft documents Azure Government service endpoints as openai.azure.us, but isAzureHost only permits .openai.azure.com, .services.ai.azure.com, and .cognitiveservices.azure.com. In government tenants, openclaw onboard --auth-choice azure-openai-api-key --azure-openai-base-url https://...openai.azure.us will reject a valid endpoint, so the new Azure setup path is unusable there.
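A sketch of the extended allowlist, assuming isAzureHost receives a bare hostname; the Azure Government suffix is the one Microsoft documents (*.openai.azure.us):

```typescript
// Illustrative: accept Azure Government endpoints alongside the public-cloud suffixes.
function isAzureHost(hostname: string): boolean {
  const normalized = hostname.toLowerCase();
  return (
    normalized.endsWith(".openai.azure.com") ||
    normalized.endsWith(".services.ai.azure.com") ||
    normalized.endsWith(".cognitiveservices.azure.com") ||
    normalized.endsWith(".openai.azure.us") // Azure Government
  );
}
```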
💡 Codex Review
Reviewed commit: b14b4bec8b
```ts
token: params.opts?.azureOpenaiApiKey,
tokenProvider: "azure-openai-responses",
secretInputMode: requestedSecretInputMode,
config: nextConfig,
expectedProviders: ["azure-openai-responses"],
```
Pass Azure CLI overrides into interactive auth setup
These reads only work when the caller forwards the Azure-specific opts, but interactive onboarding still invokes applyAuthChoice with only tokenProvider/token in src/wizard/setup.ts:467-476. As a result, openclaw onboard --auth-choice azure-openai-api-key --azure-openai-api-key ... --azure-openai-base-url ... --azure-openai-model-id ... ignores the supplied Azure values and reprompts for them, so the new Azure CLI flags are effectively usable only in --non-interactive runs.
```ts
return {
  id: modelId,
  name: `Azure OpenAI ${modelId}`,
  reasoning: false,
```
Preserve reasoning capability for onboarded Azure GPT-5 models
Hard-coding reasoning: false here makes every model created by the new Azure flow look non-reasoning, even for GPT-5 IDs like gpt-5.4. OpenClaw uses the catalog entry’s reasoning bit to choose the default think level in src/auto-reply/thinking.shared.ts:114-119, so an onboarded azure-openai-responses/gpt-5.4 will default to think=off and won’t be treated like the reasoning-capable GPT-5 models elsewhere in the catalog.
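One possible direction, as a sketch: infer the reasoning bit from the model id instead of hard-coding false. The gpt-5 prefix check is an assumption for illustration, not the repo's actual heuristic.

```typescript
// Illustrative: treat GPT-5-family ids (gpt-5, gpt-5.4, gpt-5.4-pro, ...) as reasoning-capable.
function isReasoningModelId(modelId: string): boolean {
  return /^gpt-5(\.|-|$)/.test(modelId.toLowerCase());
}
```

The catalog entry would then use `reasoning: isReasoningModelId(modelId)` rather than a constant.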
💡 Codex Review
Reviewed commit: fa42f6f5e5
src/commands/azure-openai-config.ts
Outdated
```ts
contextWindow: 200000,
maxTokens: 8192,
```
Use OpenAI-equivalent token limits for Azure GPT-5.4 models
The new Azure model catalog entry hard-codes contextWindow: 200000 and maxTokens: 8192 for every Azure model id, but the repo already treats the same GPT-5.4 ids on OpenAI as much larger (extensions/openai/openai-provider.ts sets GPT-5.4 / GPT-5.4-pro to 1,050,000 context and 128,000 max output). Those values are consumed by compaction/pruning logic in src/agents/pi-extensions/compaction-safeguard.ts:776-807, so an onboarded azure-openai-responses/gpt-5.4 will summarize and trim history far earlier than necessary and can inherit an artificially low output cap.
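A sketch of a per-model lookup instead of one hard-coded pair. The GPT-5.4 numbers mirror what the review cites from extensions/openai/openai-provider.ts; the helper and table names are illustrative.

```typescript
// Illustrative: per-model limits with the current values as the fallback.
type ModelLimits = { contextWindow: number; maxTokens: number };

const AZURE_MODEL_LIMITS: Record<string, ModelLimits> = {
  "gpt-5.4": { contextWindow: 1_050_000, maxTokens: 128_000 },
  "gpt-5.4-pro": { contextWindow: 1_050_000, maxTokens: 128_000 },
};

const DEFAULT_AZURE_LIMITS: ModelLimits = { contextWindow: 200_000, maxTokens: 8192 };

function limitsForAzureModel(modelId: string): ModelLimits {
  return AZURE_MODEL_LIMITS[modelId.toLowerCase()] ?? DEFAULT_AZURE_LIMITS;
}
```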
```ts
export const AZURE_OPENAI_PROVIDER_ID = "azure-openai-responses";
export const AZURE_OPENAI_DEFAULT_MODEL_ID = "gpt-4.1";
export const AZURE_OPENAI_DEFAULT_MODEL_REF = `${AZURE_OPENAI_PROVIDER_ID}/${AZURE_OPENAI_DEFAULT_MODEL_ID}`;
```
Register Azure Responses models in runtime thinking policies
Introducing azure-openai-responses here without a matching provider-runtime policy means Azure GPT-5.4 models never inherit the GPT-5 capabilities we already expose for OpenAI. supportsXHighThinking() only consults provider hooks before rejecting /think xhigh in src/agents/agent-command.ts:1104, and isModernModelRef() in src/agents/live-model-filter.ts:8-24 has the same limitation; meanwhile the OpenAI plugin explicitly marks the same GPT-5.4 ids as xhigh/modern in extensions/openai/openai-provider.ts:178-179. Users onboarded to azure-openai-responses/gpt-5.4 or gpt-5.4-pro will therefore lose those behaviors even though they are the same model family.
💡 Codex Review
Reviewed commit: 7aed699f67
```ts
function isAzureGpt54ClassModel(modelId: string): boolean {
  const normalized = normalizeAzureModelIdForPolicy(modelId);
  return normalized === "gpt-5.4" || normalized === "gpt-5.4-pro";
```
Handle custom Azure deployment names for GPT-5.4
When the Azure deployment name is customized (which this flow explicitly allows by asking for a “deployment/model ID”), this helper only recognizes literal gpt-5.4 / gpt-5.4-pro. Onboarding --azure-openai-model-id prod-gpt54 therefore writes the fallback 200k/8192 generic metadata, and the new bundled xhigh/modern policy hooks key off the same raw ID, so real GPT-5.4 deployments behind custom aliases lose the larger limits and GPT-5.4-specific behaviors.
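A heuristic sketch only: recognize common custom aliases by squashing separators before matching. Real deployments can be named anything, so a user-facing override (e.g. an explicit model-family flag) would still be needed; the helper name is invented.

```typescript
// Illustrative heuristic: "prod-gpt54", "gpt-5.4-pro", "GPT_5.4" all match.
function looksLikeGpt54(deploymentId: string): boolean {
  const squashed = deploymentId.toLowerCase().replace(/[^a-z0-9]/g, "");
  return squashed.includes("gpt54");
}
```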
💡 Codex Review
Reviewed commit: 38c289aa57
```ts
providers[AZURE_OPENAI_PROVIDER_ID] = {
  ...existingProviderRest,
  baseUrl,
  api: "openai-responses",
  ...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
  models: mergedModels,
```
Treat azure-openai-responses as an OpenAI-family provider
This introduces a new provider id, but the OpenAI-family gates were not updated for it. I checked src/agents/pi-tools.ts:62-65,357-360 and src/agents/transcript-policy.ts:48-84: an onboarded azure-openai-responses/gpt-5.4 session is still classified as default, so apply_patch is never added even when tools.exec.applyPatch.enabled=true, and transcript replay takes the non-OpenAI sanitization path. Because this provider is configured here with api: "openai-responses", Azure GPT-5.4 users lose OpenAI-specific agent behaviors that the new onboarding flow appears to promise.
Summary
- Rebased onto openclaw/openclaw:main; the earlier rebase left unresolved merge conflicts and checks awaiting conflict resolution, and conflict resolution also regressed non-interactive onboarding auth-choice routing.
- Rebased onto openclaw/openclaw:main again, resolved onboarding/auth-choice conflicts, restored non-interactive auth routing order, and preserved Azure OpenAI non-interactive inference wiring (--azure-openai-api-key).
Change Type
Scope
Linked Issue/PR
User-visible / Behavior Changes
- --azure-openai-api-key remains wired correctly.
Security Impact
Repro + Verification
Environment
Steps
- Rebase onto openclaw/openclaw:main.
Expected
Actual
Evidence
```
pnpm test -- src/commands/onboard-non-interactive.provider-auth.test.ts
pnpm test -- src/commands/azure-openai-config.test.ts src/commands/auth-choice.apply.azure-openai.test.ts
```
Local result: 55/55 tests passed
Failing test/log before + passing after
Trace/log snippets
Screenshot/recording
Perf numbers (if relevant)
Human Verification
Review Conversations
Compatibility / Migration
Failure Recovery (if this breaks)
Risks and Mitigations
- main moving quickly could reintroduce conflicts before merge.
- Mitigation: re-rebase onto main before final merge.