
fix(onboard): respect Azure Foundry custom provider compatibility #50535

Merged
obviyus merged 2 commits into main from codex/fix-50528-services-ai-custom-provider on Mar 19, 2026

Conversation

@obviyus
Contributor

@obviyus obviyus commented Mar 19, 2026

Summary

  • keep *.services.ai.azure.com custom providers on the selected compatibility path
  • keep classic *.openai.azure.com custom providers on the Azure Responses path
  • add regression coverage for Foundry chat-completions verification and persisted config shape

Problem

PR #49543 fixed Azure OpenAI custom-provider onboarding for classic *.openai.azure.com endpoints, but it also forced every *.services.ai.azure.com endpoint onto openai-responses.

That breaks valid Azure AI Foundry chat-completions endpoints from other providers after onboarding.

Fixes #50528.
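The predicate split this fix relies on can be sketched roughly as follows. The function names come from the PR, but the hostname matching shown here is an assumption for illustration, not the project's exact implementation:

```typescript
// Extract a lowercase hostname from a base URL, tolerating bad input.
function hostOf(baseUrl: string): string {
  try {
    return new URL(baseUrl).hostname.toLowerCase();
  } catch {
    return "";
  }
}

// Classic Azure OpenAI resources: *.openai.azure.com (pinned to the Responses API).
function isAzureOpenAiUrl(baseUrl: string): boolean {
  return hostOf(baseUrl).endsWith(".openai.azure.com");
}

// Azure AI Foundry resources: *.services.ai.azure.com (honor selected compatibility).
function isAzureFoundryUrl(baseUrl: string): boolean {
  return hostOf(baseUrl).endsWith(".services.ai.azure.com");
}

// Azure-wide concerns (auth headers, larger context defaults) still use the union.
function isAzureUrl(baseUrl: string): boolean {
  return isAzureOpenAiUrl(baseUrl) || isAzureFoundryUrl(baseUrl);
}
```

With predicates shaped like this, only the two call sites named in the review (probe endpoint selection and the persisted `api` field) need the more specific check; everything else keeps the union guard.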

Testing

  • pnpm test -- src/commands/onboard-custom.test.ts

@openclaw-barnacle openclaw-barnacle bot added commands Command implementations size: S labels Mar 19, 2026
@obviyus obviyus self-assigned this Mar 19, 2026
@openclaw-barnacle openclaw-barnacle bot added the maintainer Maintainer-authored PR label Mar 19, 2026
@greptile-apps
Contributor

greptile-apps bot commented Mar 19, 2026

Greptile Summary

This PR correctly fixes a regression introduced in #49543 where all *.services.ai.azure.com (Azure AI Foundry) custom providers were forced onto the openai-responses API path, breaking chat-completions–based Foundry models. The fix splits the old isAzureUrl helper into isAzureFoundryUrl and isAzureOpenAiUrl, and threads the more-specific predicates into the two affected code paths — the verification probe endpoint selection and the persisted api field in config.

Key changes:

  • *.openai.azure.com (classic Azure OpenAI) → continues to use the Responses API (openai-responses) and probes against the /responses endpoint.
  • *.services.ai.azure.com (Azure AI Foundry) → now respects the user-selected compatibility, defaulting to openai-completions and probing against chat/completions.
  • Azure-wide concerns (auth-header suppression, api-key headers, larger context-window defaults, reasoning-model detection) are correctly preserved for both URL families via the combined isAzureUrl guard.
  • A new integration test validates the Foundry probe URL, headers, and body shape; a renamed unit test confirms the persisted config shape.
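The persisted `api` selection described above can be sketched like this. `resolveProviderApi`, the compatibility values, and the adapter names other than `openai-responses` are assumptions based on identifiers visible in this PR's diff, not confirmed signatures:

```typescript
type Compatibility = "openai" | "anthropic";
type ProviderApi = "openai-responses" | "openai-completions" | "anthropic-messages";

// Assumed mapping from the user-selected compatibility to an adapter id.
function resolveProviderApi(compatibility: Compatibility): ProviderApi {
  return compatibility === "anthropic" ? "anthropic-messages" : "openai-completions";
}

function pickProviderApi(isAzureOpenAi: boolean, compatibility: Compatibility): ProviderApi {
  // Classic *.openai.azure.com stays pinned to the Responses API;
  // Foundry and other endpoints honor the selected compatibility.
  return isAzureOpenAi ? "openai-responses" : resolveProviderApi(compatibility);
}
```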

Confidence Score: 5/5

  • This PR is safe to merge — it is a targeted, well-tested bug fix with no risk of regressions to the classic Azure OpenAI path.
  • The change is minimal and surgical: a single boolean predicate is split into two, and the correct specialised predicate is used in each of the two affected call sites. Existing tests for classic Azure OpenAI still pass unchanged, and two new tests directly cover the Foundry regression. The only finding is a stale code comment, which does not affect runtime behaviour.
  • No files require special attention.

Comments Outside Diff (1)

  1. src/commands/onboard-custom.ts, line 22-24 (link)

    P2 Stale comment on Azure defaults constants

    The comment says "Azure OpenAI uses the Responses API which supports larger defaults", but AZURE_DEFAULT_CONTEXT_WINDOW and AZURE_DEFAULT_MAX_TOKENS are now applied to both classic *.openai.azure.com and Foundry *.services.ai.azure.com URLs (since the isAzure guard on line 648 still covers both). The comment is now misleading because Foundry endpoints aren't on the Responses API path.

    Suggested replacement:

```suggestion
// Both Azure OpenAI and Azure AI Foundry use larger context defaults
const AZURE_DEFAULT_CONTEXT_WINDOW = 400_000;
const AZURE_DEFAULT_MAX_TOKENS = 16_384;
```

Last reviewed commit: "fix: add changelog a..."

@obviyus obviyus merged commit f1e4f8e into main Mar 19, 2026
13 checks passed
@obviyus obviyus deleted the codex/fix-50528-services-ai-custom-provider branch March 19, 2026 16:37
@obviyus
Contributor Author

obviyus commented Mar 19, 2026

Landed on main.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 2a47f95421


```diff
     ? buildAzureOpenAiHeaders(params.apiKey)
     : buildOpenAiHeaders(params.apiKey);
-  if (isBaseUrlAzureUrl) {
+  if (isAzureOpenAiUrl(params.baseUrl)) {
```

P1: Accept already-versioned Foundry URLs in OpenAI verification

When a user enters a *.services.ai.azure.com/openai/v1 base URL—the normalized shape we already persist for Azure endpoints—this new branch sends the probe through resolveVerificationEndpoint() instead of the old responses path. transformAzureUrl() only treats existing /openai/deployments/... paths as special, so it rewrites that input to .../openai/v1/openai/deployments/<model>/chat/completions, which will fail verification. Re-running onboarding against an existing saved Foundry provider now breaks unless the operator manually strips /openai/v1 first.

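One hypothetical mitigation for the double-prefix problem described above is to strip an existing /openai/v1 suffix before building the deployment-style probe path. `stripAzureV1Suffix` and `buildFoundryProbeUrl` are illustrative names, not the project's real helpers, and the probe path shape is inferred from this comment:

```typescript
// Remove a trailing /openai/v1 segment from an already-normalized base URL
// so subsequent rewriting cannot produce .../openai/v1/openai/... paths.
function stripAzureV1Suffix(baseUrl: string): string {
  return baseUrl.replace(/\/openai\/v1\/?$/, "");
}

// Build a deployment-style chat-completions probe URL from a Foundry base URL.
function buildFoundryProbeUrl(baseUrl: string, model: string): string {
  const root = stripAzureV1Suffix(baseUrl).replace(/\/+$/, "");
  return `${root}/openai/deployments/${model}/chat/completions`;
}
```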

Comment on lines +689 to 691

```ts
const providerApi = isAzureOpenAi
  ? ("openai-responses" as const)
  : resolveProviderApi(params.compatibility);
```

P2: Separate Foundry providers when compatibilities differ

This line makes *.services.ai.azure.com providers honor the selected adapter, but resolveCustomProviderId() still deduplicates Azure providers only by normalized host/base URL. If someone onboards an OpenAI-compatible Foundry deployment and then an Anthropic-compatible deployment on the same resource without an explicit providerId, both models reuse the same provider record and this assignment flips models.providers[providerId].api for every previously-added model. The first model will then run under the wrong adapter on the next request.

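A hypothetical direction for the dedup issue above is to fold the selected compatibility into the derived provider id, so two deployments on the same Foundry resource with different adapters get separate provider records. The real `resolveCustomProviderId` signature and id scheme may differ:

```typescript
// Derive a provider id from the normalized host plus the selected compatibility,
// so differing adapters on the same resource no longer collide.
function resolveCustomProviderId(baseUrl: string, compatibility: string): string {
  const host = new URL(baseUrl).hostname.toLowerCase();
  const slug = host.replace(/[^a-z0-9]+/g, "-");
  return `${slug}-${compatibility}`;
}
```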

veryoung pushed a commit to veryoung/openclaw that referenced this pull request Mar 20, 2026
fuller-stack-dev pushed a commit to fuller-stack-dev/openclaw that referenced this pull request Mar 20, 2026
fuller-stack-dev pushed a commit to fuller-stack-dev/openclaw that referenced this pull request Mar 20, 2026
pholpaphankorn pushed a commit to pholpaphankorn/openclaw that referenced this pull request Mar 22, 2026
frankekn pushed a commit to artwalker/openclaw that referenced this pull request Mar 23, 2026

Labels

commands (Command implementations) · maintainer (Maintainer-authored PR) · size: S

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Custom provider onboarding forces openai-responses on all *.services.ai.azure.com endpoints

1 participant