New: Add support for Azure models (supports GPT-5.4 and more) #39540
sawyer-shi wants to merge 10 commits into openclaw:main
Conversation
Greptile Summary
This PR adds first-class Azure OpenAI support to OpenClaw. Key observations:
Confidence Score: 4/5
Last reviewed commit: e6a0b45
```ts
let parsed: URL;
try {
  parsed = new URL(candidate);
} catch {
  throw new Error(
    "Azure OpenAI base URL must be a valid URL (for example: https://<resource>.openai.azure.com/openai/v1).",
  );
}

if (!isAzureHost(parsed.hostname)) {
  throw new Error(
    "Azure OpenAI base URL must use an Azure host (*.openai.azure.com, *.services.ai.azure.com, or *.cognitiveservices.azure.com).",
  );
}

return `${parsed.origin}/openai/v1`;
```
**Missing HTTPS protocol enforcement**
`normalizeAzureOpenAIBaseUrl` validates the hostname but never checks that the scheme is `https:`. An `http://` URL would pass the `isAzureHost` check and be returned as-is, meaning the stored `baseUrl` (and any subsequent API calls) would use plain HTTP, transmitting the API key in cleartext.
Since the PR explicitly includes security hardening (host suffix validation + URL normalization), adding a protocol check here is consistent with that goal:
```suggestion
let parsed: URL;
try {
  parsed = new URL(candidate);
} catch {
  throw new Error(
    "Azure OpenAI base URL must be a valid URL (for example: https://<resource>.openai.azure.com/openai/v1).",
  );
}
if (!isAzureHost(parsed.hostname)) {
  throw new Error(
    "Azure OpenAI base URL must use an Azure host (*.openai.azure.com, *.services.ai.azure.com, or *.cognitiveservices.azure.com).",
  );
}
if (parsed.protocol !== "https:") {
  throw new Error(
    "Azure OpenAI base URL must use HTTPS (received: " + parsed.protocol.replace(":", "") + ").",
  );
}
return `${parsed.origin}/openai/v1`;
```
The test suite in azure-openai-config.test.ts should also cover this case with a dedicated it("rejects non-HTTPS base URLs", ...) assertion.
```ts
export function applyAzureOpenAIConfig(
  cfg: OpenClawConfig,
  params: { baseUrl: string; modelId: string; apiVersion?: string },
): OpenClawConfig {
  const next = applyAzureOpenAIProviderConfig(cfg, params);
  return applyAgentDefaultModelPrimary(next, `${AZURE_OPENAI_PROVIDER_ID}/${params.modelId.trim()}`);
}
```
**Normalization bypass — raw `.trim()` instead of `normalizeAzureOpenAIModelId`**
`applyAzureOpenAIConfig` passes `params.modelId.trim()` directly to `applyAgentDefaultModelPrimary` while `applyAzureOpenAIProviderConfig` (called one line above) stores the model under `normalizeAzureOpenAIModelId(params.modelId)` as its key.
Today both produce the same result since `normalizeAzureOpenAIModelId` only does `.trim()`. However, if normalization ever gains additional transforms (e.g. lower-casing, stripping deployment-path prefixes), the key written by `applyAzureOpenAIProviderConfig` and the reference written here would silently diverge, breaking the default-model lookup at runtime.
```suggestion
export function applyAzureOpenAIConfig(
  cfg: OpenClawConfig,
  params: { baseUrl: string; modelId: string; apiVersion?: string },
): OpenClawConfig {
  const next = applyAzureOpenAIProviderConfig(cfg, params);
  return applyAgentDefaultModelPrimary(next, `${AZURE_OPENAI_PROVIDER_ID}/${normalizeAzureOpenAIModelId(params.modelId)}`);
}
```
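The single-normalizer pattern behind this suggestion can be sketched in isolation. The names mirror the PR's identifiers, but the bodies here are assumptions based on the review text (which says the normalizer currently only trims):

```typescript
// Route every model-id reference through one normalizer so the provider-map
// key and the default-model reference can never diverge.
const AZURE_OPENAI_PROVIDER_ID = "azure-openai-responses";

function normalizeAzureOpenAIModelId(modelId: string): string {
  return modelId.trim(); // future transforms (casing, prefixes) apply everywhere
}

function defaultModelRef(modelId: string): string {
  // Same normalizer as the provider-map write path, never a raw .trim().
  return `${AZURE_OPENAI_PROVIDER_ID}/${normalizeAzureOpenAIModelId(modelId)}`;
}
```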
src/commands/azure-openai-config.ts
Outdated
```ts
baseUrl,
api: "openai-responses",
...(normalizedApiKey ? { apiKey: normalizedApiKey } : {}),
models: mergedModels.length > 0 ? mergedModels : [defaultModel],
```
**Unreachable fallback — `mergedModels` is always non-empty**
`mergedModels` is constructed as either `existingModels` (when the `modelId` is already present, meaning it has at least one element) or `[...existingModels, defaultModel]` (which always appends `defaultModel`). Both branches produce an array with at least one element, so the `? mergedModels : [defaultModel]` fallback here can never be reached. The line can be simplified to just `models: mergedModels`.
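A minimal reproduction of that merge logic makes the claim easy to check; the `ModelDef` shape is a stand-in for the PR's model type:

```typescript
type ModelDef = { id: string };

// Both branches yield a non-empty array, so `models: mergedModels` needs no fallback.
function mergeModels(existingModels: ModelDef[], defaultModel: ModelDef): ModelDef[] {
  return existingModels.some((model) => model.id === defaultModel.id)
    ? existingModels // id already present: at least one element
    : [...existingModels, defaultModel]; // always appends defaultModel
}
```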
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e6a0b45bc3
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```ts
if (!isAzureHost(parsed.hostname)) {
  throw new Error(
    "Azure OpenAI base URL must use an Azure host (*.openai.azure.com, *.services.ai.azure.com, or *.cognitiveservices.azure.com).",
  );
```
Reject non-HTTPS Azure OpenAI base URLs
`normalizeAzureOpenAIBaseUrl` currently validates only the hostname suffix and then returns `parsed.origin`, so inputs like `http://<resource>.openai.azure.com` pass validation and become persisted provider endpoints. In that misconfiguration path, API-key-authenticated requests can be sent over plaintext HTTP (or through insecure redirects/proxies), which is an avoidable credential-exposure risk for Azure onboarding flows.
```ts
!(await maybeSetResolvedApiKey(resolved, (value) =>
  setAzureOpenaiApiKey(value, undefined, apiKeyStorageOptions),
))
```
Validate Azure required flags before writing credentials
In non-interactive Azure onboarding, the API key is persisted via `setAzureOpenaiApiKey` before validating `--azure-openai-base-url` and `--azure-openai-model-id`. If either flag is missing or invalid, the command exits with an error but has already mutated auth profiles, so a failed run can still overwrite or create Azure credentials unexpectedly.
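The fix amounts to reordering: validate first, persist second. A sketch under assumed shapes (the option names mirror the CLI flags; `persistKey` stands in for `setAzureOpenaiApiKey`, which has a different real signature):

```typescript
// Fail fast on missing flags so a failed run never mutates auth profiles.
function validateAzureOnboardingFlags(opts: {
  azureOpenaiBaseUrl?: string;
  azureOpenaiModelId?: string;
}): void {
  if (!opts.azureOpenaiBaseUrl?.trim()) throw new Error("Missing --azure-openai-base-url");
  if (!opts.azureOpenaiModelId?.trim()) throw new Error("Missing --azure-openai-model-id");
}

// Handler call order (sketch): validation happens before any credential write.
async function onboardAzure(
  opts: { azureOpenaiBaseUrl?: string; azureOpenaiModelId?: string; apiKey: string },
  persistKey: (key: string) => Promise<void>,
): Promise<void> {
  validateAzureOnboardingFlags(opts); // throws with no side effects
  await persistKey(opts.apiKey); // only reached once inputs are valid
}
```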
💡 Codex Review
Reviewed commit: 53ed5bc2a4
```ts
const noteAgentModel = createAuthChoiceAgentModelNoter(params);
const requestedSecretInputMode = normalizeSecretInputModeInput(params.opts?.secretInputMode);

await ensureApiKeyFromOptionEnvOrPrompt({
```
Validate Azure inputs before persisting API key
The handler writes credentials via `ensureApiKeyFromOptionEnvOrPrompt` before validating `azureOpenaiBaseUrl`/`azureOpenaiModelId`. In interactive onboarding/configure flows, an invalid CLI-provided Azure URL or model ID will throw during normalization after the key has already been saved, so a failed run still mutates auth profiles. Reordering validation ahead of credential writes (or rolling back on failure) would prevent this partial-state side effect.
```ts
  );
}

return `${parsed.origin}/openai/v1`;
```
Preserve api-version query or reject it explicitly
`normalizeAzureOpenAIBaseUrl` unconditionally returns `${parsed.origin}/openai/v1`, which discards any query string on user input. When users provide a full Azure endpoint containing `?api-version=...` (notably preview deployments), onboarding silently strips that version and runtime calls default back to v1, which can break preview-only setups with no clear explanation. This should either preserve/migrate `api-version` into `azureApiVersion` or fail fast when query parameters are present.
💡 Codex Review
Reviewed commit: 078f0280d9
```ts
provider: "azure-openai-responses",
cfg: baseConfig,
flagValue: opts.azureOpenaiApiKey,
flagName: "--azure-openai-api-key",
envVar: "AZURE_OPENAI_API_KEY",
```
Add Azure env-key discovery for non-interactive auth
This new Azure branch relies on `resolveApiKey(...)`, which in turn uses `resolveNonInteractiveApiKey`/`resolveEnvApiKey` to detect keys from environment variables, but `azure-openai-responses` is not registered in `src/agents/model-auth-env-vars.ts` (`PROVIDER_ENV_API_KEY_CANDIDATES`). As a result, `openclaw onboard --non-interactive --auth-choice azure-openai-api-key` fails with "Missing --azure-openai-api-key (or AZURE_OPENAI_API_KEY in env...)" even when `AZURE_OPENAI_API_KEY` is set, breaking CI/automation flows that intentionally avoid passing secrets on the command line.
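To illustrate the registration this comment asks for, here is a hypothetical miniature of the env-candidate map and lookup; the real `PROVIDER_ENV_API_KEY_CANDIDATES` and `resolveEnvApiKey` in the repo may have different shapes:

```typescript
// Registering the provider in the candidate map is what lets env discovery
// find AZURE_OPENAI_API_KEY without a CLI flag.
const PROVIDER_ENV_API_KEY_CANDIDATES: Record<string, string[]> = {
  "azure-openai-responses": ["AZURE_OPENAI_API_KEY"],
};

function resolveEnvApiKey(
  provider: string,
  env: Record<string, string | undefined>,
): string | undefined {
  for (const name of PROVIDER_ENV_API_KEY_CANDIDATES[provider] ?? []) {
    const value = env[name]?.trim();
    if (value) return value;
  }
  return undefined; // unregistered provider or unset env var
}
```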
💡 Codex Review
Reviewed commit: a776132ed8
```ts
await ensureApiKeyFromOptionEnvOrPrompt({
  token: params.opts?.azureOpenaiApiKey,
  tokenProvider: "azure-openai-responses",
  secretInputMode: requestedSecretInputMode,
```
Reject ref mode when Azure key is provided inline
This branch accepts `--secret-input-mode ref` together with `--azure-openai-api-key` and immediately routes the inline key through `ensureApiKeyFromOptionEnvOrPrompt`, which stores a reference credential instead of validating that `AZURE_OPENAI_API_KEY` is actually present in the environment. In that scenario onboarding reports success but the saved profile points to a missing env var, so auth fails later at runtime with a hard-to-diagnose broken configuration. The non-interactive Azure path already guards this combination, so the interactive path should apply the same validation.
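A sketch of the guard being described, under assumed option shapes (the real interactive handler takes richer parameters): in `ref` mode the stored credential is only a pointer to `AZURE_OPENAI_API_KEY`, so the env var must exist, and an inline key contradicts ref mode.

```typescript
// Guard the ref-mode / inline-key combination before storing anything.
function validateAzureRefMode(
  secretInputMode: "plaintext" | "ref",
  inlineKey: string | undefined,
  env: Record<string, string | undefined>,
): void {
  if (secretInputMode !== "ref") return;
  if (inlineKey) {
    throw new Error("--secret-input-mode ref cannot be combined with an inline --azure-openai-api-key");
  }
  if (!env.AZURE_OPENAI_API_KEY?.trim()) {
    throw new Error("--secret-input-mode ref requires AZURE_OPENAI_API_KEY to be set");
  }
}
```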
💡 Codex Review
Reviewed commit: d2b9ddb24c
```ts
const mergedModels = existingModels.some((model) => model.id === modelId)
  ? existingModels
  : [...existingModels, defaultModel];
```
Backfill Azure model compat when model ID already exists
When onboarding updates a config that already contains `azure-openai-responses/<modelId>`, this branch keeps `existingModels` unchanged and skips the `compat: { supportsStore: false }` default from `buildAzureModelDefinition`. In that migration path, runtime logic in `src/agents/pi-embedded-runner/extra-params.ts` (`shouldForceResponsesStore`) will still treat the model as store-capable and force `store=true` for Azure Responses calls, so rerunning onboarding does not actually fix existing Azure model entries that need store disabled.
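The backfill being requested can be sketched as a small map over the existing entries; the `AzureModelDef` shape is assumed from the review text, not copied from the repo:

```typescript
type AzureModelDef = { id: string; compat?: { supportsStore?: boolean } };

// Set the store-compat default on a matching entry only when it was never
// set explicitly, so user overrides survive reruns of onboarding.
function backfillSupportsStore(models: AzureModelDef[], modelId: string): AzureModelDef[] {
  return models.map((model) =>
    model.id === modelId && model.compat?.supportsStore === undefined
      ? { ...model, compat: { ...model.compat, supportsStore: false } }
      : model,
  );
}
```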
Fixed in a28758c: existing `azure-openai-responses` model entries now backfill `compat.supportsStore: false` when the `modelId` already exists, with regression test coverage in `src/commands/azure-openai-config.test.ts`.
Force-pushed: cbf706d → 6627906
💡 Codex Review
Reviewed commit: 13f8a55604
```ts
provider: "azure-openai-responses",
cfg: baseConfig,
flagValue: opts.azureOpenaiApiKey,
flagName: "--azure-openai-api-key",
envVar: "AZURE_OPENAI_API_KEY",
```
Pass explicit Azure env var name to resolveApiKey
Fresh evidence in this commit shows the Azure non-interactive path calls `resolveApiKey` without `envVarName`, so `resolveNonInteractiveApiKey` cannot read `AZURE_OPENAI_API_KEY` unless the provider is registered in its env-candidate map. In this changeset, that means `openclaw onboard --non-interactive --auth-choice azure-openai-api-key` still errors as missing credentials even when `AZURE_OPENAI_API_KEY` is set, which breaks CI/automation flows that avoid putting secrets on the command line.
Fixed!
Please accept this PR as soon as possible, otherwise I will have to resolve conflicts and handle it every day.
💡 Codex Review
Reviewed commit: ee023e5834
```ts
tokenProvider: "azure-openai-responses",
secretInputMode: requestedSecretInputMode,
config: nextConfig,
expectedProviders: ["azure-openai-responses"],
provider: "azure-openai-responses",
```
Register Azure env key for interactive auth discovery
This new Azure flow delegates key lookup to `ensureApiKeyFromOptionEnvOrPrompt`, which in turn relies on `resolveEnvApiKey(provider)` for env discovery; however `azure-openai-responses` is still not in `src/agents/model-auth-env-vars.ts`. Even though non-interactive onboarding now works via an explicit `envVarName` workaround, interactive onboarding with `AZURE_OPENAI_API_KEY` set will still miss the existing key and force manual entry, and env-only Azure auth resolution remains inconsistent with other API-key providers.
Fixed!
Please accept this PR as soon as possible, otherwise I will have to resolve conflicts and handle it every day.
Force-pushed: f3fba5c → 3b8f002
💡 Codex Review
Reviewed commit: 3b8f002e41
```ts
normalized.endsWith(".services.ai.azure.com") ||
normalized.endsWith(".cognitiveservices.azure.com")
```
Keep Azure host allowlist consistent with stream wrappers
This change accepts `*.services.ai.azure.com` and `*.cognitiveservices.azure.com` as valid Azure base URLs, but direct-OpenAI runtime detection still only matches `*.openai.azure.com` in `src/agents/pi-embedded-runner/openai-stream-wrappers.ts` (`isDirectOpenAIBaseUrl`). As a result, newly accepted Azure endpoints bypass the OpenAI-responses wrapper path (`shouldForceResponsesStore` and related payload normalization), so they do not get the same Azure handling as existing `*.openai.azure.com` setups and can still hit Azure-specific response failures in configurations that rely on that wrapper behavior.
Fixed!
Please accept this PR as soon as possible, otherwise I will have to resolve conflicts and handle it every day.
💡 Codex Review
Reviewed commit: 6c32ad7527
```ts
normalized.endsWith(".services.ai.azure.com") ||
normalized.endsWith(".cognitiveservices.azure.com")
```
Keep Azure host allowlists consistent across runtime wrappers
This change now accepts `*.services.ai.azure.com` and `*.cognitiveservices.azure.com` as valid Azure base URLs, but `src/agents/pi-embedded-runner/openai-stream-wrappers.ts` still treats only `*.openai.azure.com` as a direct OpenAI/Azure endpoint in `isDirectOpenAIBaseUrl`. In configurations that rely on that direct-endpoint path (for example the `shouldForceResponsesStore` / Responses payload handling path), newly accepted Azure hosts will be classified differently at runtime than legacy Azure hosts, leading to inconsistent request behavior for the same provider.
Fixed!
Please accept this PR as soon as possible, otherwise I will have to resolve conflicts and handle it every day.
💡 Codex Review
Reviewed commit: 090a38529a
```ts
const noteAgentModel = createAuthChoiceAgentModelNoter(params);
const requestedSecretInputMode = normalizeSecretInputModeInput(params.opts?.secretInputMode);

await ensureApiKeyFromOptionEnvOrPrompt({
```
Validate Azure URL/model before storing API key
`applyAuthChoiceAzureOpenAI` persists credentials through `ensureApiKeyFromOptionEnvOrPrompt` before it validates `azureOpenaiBaseUrl` and `azureOpenaiModelId`. When users provide an invalid `--azure-openai-base-url` or `--azure-openai-model-id`, the command exits with a validation error later in the flow, but the Azure API key has already been written to auth profiles, leaving partial state after a failed onboarding run.
Fixed!
Please accept this PR as soon as possible, otherwise I will have to resolve conflicts and handle it every day.
💡 Codex Review
Reviewed commit: b81b773a28
```ts
optionKey: "azureOpenaiApiKey",
authChoice: "azure-openai-api-key",
cliFlag: "--azure-openai-api-key",
cliOption: "--azure-openai-api-key <key>",
description: "Azure OpenAI API key",
```
Add Azure auth choice to grouped onboarding providers
Adding `azure-openai-api-key` here makes it available in the flat auth-choice list, but interactive onboarding uses `promptAuthChoiceGrouped`/`buildAuthChoiceGroups`, which only renders choices mapped in `AUTH_CHOICE_GROUP_DEFS` (plus plugin-provided groups). Because this new built-in choice was not added to a static group, `openclaw onboard` users who rely on the grouped provider picker cannot select Azure OpenAI at all unless they pass `--auth-choice` manually, so the new provider path is effectively inaccessible in the default interactive flow.
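A hypothetical miniature of the grouped-picker mechanism makes the gap concrete: a choice is only selectable interactively if some group definition includes it. The group label and shapes below are assumptions, not the repo's actual definitions:

```typescript
type AuthChoiceGroupDef = { label: string; choices: string[] };

// Only choices listed in some group reach the interactive picker.
const AUTH_CHOICE_GROUP_DEFS: AuthChoiceGroupDef[] = [
  { label: "OpenAI", choices: ["openai-api-key", "azure-openai-api-key"] },
];

function visibleAuthChoices(defs: AuthChoiceGroupDef[]): string[] {
  return defs.flatMap((group) => group.choices);
}
```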
💡 Codex Review
Reviewed commit: 979e26e2ba
```ts
authChoice: "azure-openai-api-key",
cliFlag: "--azure-openai-api-key",
cliOption: "--azure-openai-api-key <key>",
description: "Azure OpenAI API key",
```
Add Azure auth choice to grouped provider picker
This adds `azure-openai-api-key` to the flat auth-flag catalog, but interactive onboarding/configure flows render choices from grouped definitions (`buildAuthChoiceGroups` → `AUTH_CHOICE_GROUP_DEFS`) in `src/commands/auth-choice-prompt.ts`, and no Azure group mapping was added there. As a result, users in the default grouped picker cannot select Azure OpenAI unless they already know to pass `--auth-choice azure-openai-api-key` manually, so the new provider path is effectively hidden in normal interactive setup.
Summary
- …(`*.cognitiveservices.azure.com`); and there was no configurable API version for preview endpoints.
- `azure-openai-api-key` (interactive + non-interactive).
- `--azure-openai-base-url`, `--azure-openai-model-id`, `--azure-openai-api-version` (default `v1`, optional preview override).
- `*.openai.azure.com`, `*.services.ai.azure.com`, and `*.cognitiveservices.azure.com`.
- `azureApiVersion` in model params when non-default, forwarded at runtime via stream options.

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
User-visible / Behavior Changes
- `openclaw onboard --auth-choice azure-openai-api-key`
- `--azure-openai-base-url`
- `--azure-openai-model-id`
- `--azure-openai-api-version` (defaults to `v1`)
- `azure-openai-api-key` now requires base URL + model ID and fails fast with a clear error if missing.
- `*.cognitiveservices.azure.com`.

Security Impact (required)
- Yes, explain risk + mitigation: `AZURE_OPENAI_API_KEY` handling through existing secret paths (plaintext/ref), with no new plaintext leak surface.

Repro + Verification
Environment
- `azure-openai-responses/gpt-5.4`
- `AZURE_OPENAI_API_KEY`, `azureOpenaiBaseUrl`, `azureOpenaiModelId`, `azureApiVersion`

Steps
- `azure-openai-api-key`.
- `--azure-openai-api-version`.

Expected
- `azure-openai-responses` provider config is generated.
- `apiVersion` is configurable and forwarded.

Actual
Evidence
Additional result:
- 142 passed.

Human Verification (required)
What you personally verified (not just CI), and how:
- …(`*.cognitiveservices.azure.com`).
- `--azure-openai-api-version` default (`v1`) and preview override behavior.

Compatibility / Migration
Failure Recovery (if this breaks)
- `azure-openai-responses` and switch back to existing provider refs (`openai/*`, `openrouter/*`, etc.).
- `models.providers["azure-openai-responses"]` and related `agents.defaults.models` entries.

Risks and Mitigations
- `v1`, and explicit docs for v1 vs preview setup.