fix(onboard): respect Azure Foundry custom provider compatibility#50535
Greptile Summary

This PR correctly fixes a regression introduced in #49543 where all `*.services.ai.azure.com` endpoints were forced onto `openai-responses`.
Confidence Score: 5/5
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2a47f95421
```diff
    ? buildAzureOpenAiHeaders(params.apiKey)
    : buildOpenAiHeaders(params.apiKey);
- if (isBaseUrlAzureUrl) {
+ if (isAzureOpenAiUrl(params.baseUrl)) {
```
Accept already-versioned Foundry URLs in OpenAI verification
When a user enters a `*.services.ai.azure.com/openai/v1` base URL (the normalized shape already persisted for Azure endpoints), this new branch sends the probe through `resolveVerificationEndpoint()` instead of the old responses path. `transformAzureUrl()` only treats existing `/openai/deployments/...` paths as special, so it rewrites that input to `.../openai/v1/openai/deployments/<model>/chat/completions`, which fails verification. Re-running onboarding against an existing saved Foundry provider now breaks unless the operator manually strips `/openai/v1` first.
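A minimal sketch of the kind of guard that would avoid the double rewrite. The helper names (`isVersionedFoundryUrl`, `buildProbeUrl`) and the exact path shapes are illustrative assumptions, not code from the PR:

```typescript
// Hypothetical sketch: detect base URLs that already carry the /openai/v1
// version segment so the verification probe is sent there directly instead
// of being rewritten into a second deployments path.
function isVersionedFoundryUrl(baseUrl: string): boolean {
  const url = new URL(baseUrl);
  return (
    url.hostname.endsWith(".services.ai.azure.com") &&
    /\/openai\/v1\/?$/.test(url.pathname)
  );
}

function buildProbeUrl(baseUrl: string, model: string): string {
  const trimmed = baseUrl.replace(/\/$/, "");
  if (isVersionedFoundryUrl(baseUrl)) {
    // Already-versioned Foundry URL: probe it as-is.
    return `${trimmed}/chat/completions`;
  }
  // Unversioned Azure URL: fall back to the deployments-style path.
  return `${trimmed}/openai/deployments/${model}/chat/completions`;
}
```

With such a guard, re-running onboarding against a saved `/openai/v1` URL would probe the endpoint the user actually entered rather than a doubly-versioned path.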
```ts
const providerApi = isAzureOpenAi
  ? ("openai-responses" as const)
  : resolveProviderApi(params.compatibility);
```
Separate Foundry providers when compatibilities differ
This line makes `*.services.ai.azure.com` providers honor the selected adapter, but `resolveCustomProviderId()` still deduplicates Azure providers only by normalized host/base URL. If someone onboards an OpenAI-compatible Foundry deployment and then an Anthropic-compatible deployment on the same resource without an explicit `providerId`, both models reuse the same provider record, and this assignment flips `models.providers[providerId].api` for every previously-added model. The first model will then run under the wrong adapter on the next request.
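One possible shape of the fix, sketched with hypothetical names: fold the selected compatibility into the derived provider id so deployments with different adapters on the same resource get distinct records. `deriveProviderId` and the `Compatibility` type are illustrative; the real `resolveCustomProviderId()` may normalize differently:

```typescript
// Hypothetical sketch: include the compatibility in the dedup key so an
// OpenAI-compatible and an Anthropic-compatible deployment on the same
// Foundry resource no longer collapse into one provider record.
type Compatibility = "openai" | "anthropic";

function deriveProviderId(baseUrl: string, compatibility: Compatibility): string {
  const host = new URL(baseUrl).hostname.toLowerCase();
  // host alone is the current (colliding) key; the suffix disambiguates.
  return `${host.replace(/\./g, "-")}-${compatibility}`;
}
```

Keying on host plus compatibility means a later onboarding can never silently rewrite `api` under an existing provider's models.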
Summary
- `*.services.ai.azure.com` custom providers on the selected compatibility path
- `*.openai.azure.com` custom providers on the Azure Responses path

Problem
PR #49543 fixed Azure OpenAI custom-provider onboarding for classic `*.openai.azure.com` endpoints, but it also forced every `*.services.ai.azure.com` endpoint onto `openai-responses`. That breaks valid Azure AI Foundry chat-completions endpoints from other providers after onboarding.
Fixes #50528.
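The intended routing rule can be sketched roughly as follows. The host suffixes come from the PR description; the function name and the `ProviderApi` literals are assumptions for illustration:

```typescript
// Hypothetical sketch of the routing rule this PR restores: classic Azure
// OpenAI hosts stay pinned to the Responses path, while Foundry hosts
// honor the compatibility the user selected during onboarding.
type ProviderApi = "openai-responses" | "openai-completions" | "anthropic-messages";

function resolveApiForHost(hostname: string, selected: ProviderApi): ProviderApi {
  if (hostname.endsWith(".openai.azure.com")) {
    // Classic Azure OpenAI endpoints always use the Responses API.
    return "openai-responses";
  }
  // *.services.ai.azure.com (Foundry) and other hosts keep the
  // user-selected compatibility.
  return selected;
}
```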
Testing
pnpm test -- src/commands/onboard-custom.test.ts