Conversation
This pull request has been automatically marked as stale due to inactivity.
Force-pushed d832ac4 to d897c72
Force-pushed d897c72 to 9b0e1b6
Force-pushed 497403d to a5b3691
Greptile Summary: This PR adds a new
Confidence Score: 3/5
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a5b3691699
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
Force-pushed 32473ac to 400859e
All four issues addressed in latest force-push:
💡 Codex Review
Reviewed commit: 7a7a79644b
Force-pushed 3fd9eb7 to 408ce99
💡 Codex Review
Reviewed commit: c1d8d0e372
Force-pushed 98a9b6f to 9fe4038
💡 Codex Review
Reviewed commit: 9fe4038aa9
Force-pushed 9fe4038 to 0138b7b
💡 Codex Review
Reviewed commit: 0138b7bff2
Force-pushed af19638 to 8bb14bc
💡 Codex Review
Reviewed commit: 8bb14bc93b
Force-pushed fdd130c to d8ed5fb
💡 Codex Review
Reviewed commit: d8ed5fb136
@@ -409,6 +423,16 @@ export const TtsConfigSchema = z
    })
    .strict()
    .optional(),
  azure: z
    .object({
      apiKey: z.string().optional().register(sensitive),
Allow SecretInput for Azure TTS API keys
The new Azure TTS config only accepts apiKey as a plain string here, while other TTS providers use SecretInputSchema; this means object-style secret refs (for example env/file/exec secret refs) are rejected for messages.tts.azure.apiKey even though the rest of TTS supports them. In setups that keep provider keys in secret providers instead of plaintext config/env, Azure TTS cannot be configured and falls back or fails unexpectedly.
Fixed — changed apiKey from z.string() to SecretInputSchema so object-style secret refs (env/file/exec) are accepted for Azure TTS, consistent with other TTS providers.
const openaiBaseUrl = `${trimmedEndpoint}/openai/v1`;
const anthropicBaseUrl = `${trimmedEndpoint}/anthropic`;
Normalize Azure endpoint before composing OpenAI model URLs
This composes openaiBaseUrl by unconditionally appending /openai/v1, so if the configured endpoint already uses an OpenAI-compatible suffix (for example https://.../openai/v1), model base URLs become .../openai/v1/openai/v1 and chat requests 404. The same endpoint pattern is accepted elsewhere (e.g. Azure audio URL normalization), so this path should strip /openai/v1 (or /v1) before building model URLs.
Fixed — buildAzureFoundryProvider now strips /openai/v1 and /v\d+ suffixes before composing model URLs, matching the same normalization applied in buildTranscriptionUrl for audio.
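The suffix-stripping described in this reply can be sketched as a small normalizer. The function name and exact regexes here are illustrative assumptions, not the repo's actual implementation:

```typescript
// Hypothetical sketch of the endpoint normalization described above:
// strip trailing slashes, an /openai/vN suffix, or a bare /vN suffix
// before composing model base URLs.
function normalizeAzureEndpoint(endpoint: string): string {
  return endpoint
    .replace(/\/+$/, "") // drop trailing slashes
    .replace(/\/openai\/v\d+$/i, "") // strip OpenAI-compatible suffix, e.g. /openai/v1
    .replace(/\/v\d+$/i, ""); // strip bare version suffix, e.g. /v1
}
```

With this in place, appending /openai/v1 to the normalized base can never produce a doubled .../openai/v1/openai/v1 path.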
Force-pushed d8ed5fb to 1f4452d
💡 Codex Review
Reviewed commit: 1f4452d642
      voice: raw.openai?.voice ?? DEFAULT_OPENAI_VOICE,
    },
    azure: {
      apiKey: raw.azure?.apiKey,
Resolve Azure TTS secret refs before using apiKey
messages.tts.azure.apiKey is validated as SecretInput, but this assignment keeps the raw value instead of resolving it like the OpenAI/ElevenLabs branches do. If users configure an env/file/exec secret ref object, resolveTtsApiKey returns that object and azureTTS sends an api-key header with a non-secret string representation (for example [object Object]), which causes Azure auth to fail for that configuration path.
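For illustration, resolving an object-style secret ref to a concrete string before it reaches an HTTP header might look like the sketch below. The SecretInput shape and resolver name are hypothetical; OpenClaw's actual SecretInputSchema and resolution logic may differ:

```typescript
import { readFileSync } from "node:fs";
import { execSync } from "node:child_process";

// Hypothetical shape: a secret is either a plaintext string or a ref object.
type SecretInput = string | { env: string } | { file: string } | { exec: string };

// Resolve the ref to a concrete string before use, so a ref object is never
// stringified into "[object Object]" in an api-key header.
function resolveSecret(input: SecretInput | undefined): string | undefined {
  if (input === undefined) return undefined;
  if (typeof input === "string") return input; // plaintext key
  if ("env" in input) return process.env[input.env]; // env-var ref
  if ("file" in input) return readFileSync(input.file, "utf8").trim(); // file ref
  return execSync(input.exec, { encoding: "utf8" }).trim(); // exec ref
}
```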
const baseUrl = endpoint.replace(/\/+$/, "");
const encodedVersion = encodeURIComponent(apiVersion);
const url = `${baseUrl}/openai/deployments/${encodeURIComponent(model)}/audio/speech?api-version=${encodedVersion}`;
Normalize Azure TTS endpoint before building deployment URL
This URL builder only strips trailing slashes before appending /openai/deployments/..., so an OpenAI-compatible endpoint like https://.../openai/v1 produces .../openai/v1/openai/deployments/... and 404s. The same suffix-normalization concern is already handled in other Azure paths, so this path should also strip /openai/v1 (or /v1) before composing the deployment URL.
const endpoint =
  discoveryConfig?.endpoint?.trim() ||
  resolveAzureFoundryEndpointEnv(env)?.value ||
  "https://models.inference.ai.azure.com";
Strip OpenAI suffix before Azure discovery endpoint reuse
Fresh evidence: endpoint normalization was added for the static Azure provider builder, but the discovery resolver still reuses the raw endpoint. If azureFoundryDiscovery.endpoint (or env alias) is set to an OpenAI-compatible URL ending in /openai/v1, discovery requests hit .../openai/v1/models and discovered model defaults are later built from the same suffixed base, so discovery can silently return no models or generate broken model base URLs.
💡 Codex Review
Reviewed commit: 791796a2ab
};
/** Azure AI Foundry TTS configuration. */
azure?: {
  apiKey?: string;
Use SecretInput for Azure TTS apiKey typing
messages.tts.azure.apiKey is typed as string here, but the runtime schema now accepts SecretInput (env/file/exec refs), so TypeScript config users cannot supply object-style secret refs without type errors even though runtime validation supports them. This creates a contract mismatch between OpenClawConfig typing and actual config behavior, and it blocks non-plaintext Azure TTS credentials in typed configs.
💡 Codex Review
Reviewed commit: 09cdaba6b4
  return config.openai.apiKey || process.env.OPENAI_API_KEY;
}
if (provider === "azure") {
  return config.azure.apiKey || resolveAzureFoundryApiKeyEnv(process.env)?.value;
Normalize Azure TTS apiKey before env fallback
resolveTtsApiKey returns config.azure.apiKey verbatim, so a whitespace-only configured value (for example from an empty template substitution) is treated as a valid key and prevents fallback to AZURE_FOUNDRY_API_KEY/aliases. In that scenario Azure TTS requests are sent with an invalid api-key header and fail authentication even though a valid env key is present. This branch should trim/normalize the configured key before deciding whether to use env fallback.
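A minimal sketch of the trim-then-fallback behavior the comment asks for (the function name and parameters here are illustrative, not the repo's API):

```typescript
// Treat a whitespace-only configured key as absent so the env fallback
// (e.g. AZURE_FOUNDRY_API_KEY) can take effect.
function pickApiKey(
  configured: string | undefined,
  envValue: string | undefined,
): string | undefined {
  const trimmed = configured?.trim();
  return trimmed ? trimmed : envValue?.trim() || undefined;
}
```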
💡 Codex Review
Reviewed commit: b1b4925347
const response = await fetchFn(url, {
  method: "GET",
  headers: {
    "api-key": params.apiKey,
    Accept: "application/json",
Add request timeout to Azure discovery fetch
discoverAzureFoundryModels issues the /models call without any timeout or abort signal, so a stalled Azure endpoint (for example, dropped packets or a hanging proxy) can block implicit provider resolution for an unbounded time during startup/model-config generation. Other discovery paths in this repo use bounded AbortSignal.timeout(...), so this path should also enforce a timeout to avoid long hangs.
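One way to bound the call, assuming a runtime with WHATWG fetch and AbortSignal.timeout; the function name and timeout value are illustrative:

```typescript
// Abort the discovery request if the endpoint stalls, instead of hanging
// provider resolution indefinitely.
async function fetchModelsWithTimeout(
  url: string,
  apiKey: string,
  timeoutMs = 10_000,
): Promise<unknown> {
  const response = await fetch(url, {
    method: "GET",
    headers: { "api-key": apiKey, Accept: "application/json" },
    signal: AbortSignal.timeout(timeoutMs), // rejects with a TimeoutError on expiry
  });
  if (!response.ok) {
    throw new Error(`model discovery failed (HTTP ${response.status})`);
  }
  return response.json();
}
```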
if (models.length === 0) {
  return null;
Preserve empty Azure discovery results
Returning null when discovery yields zero models causes the caller to fall back to the preloaded static Azure catalog, which means an explicit providerFilter (or any valid zero-result discovery state) is silently ignored and unavailable models are reintroduced. This should return an explicit Azure provider result (with the discovered model list, even if empty) so discovery output is respected.
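In sketch form, the suggested behavior keeps an empty list as a real result rather than a null sentinel (types and names illustrative):

```typescript
interface DiscoveryResult {
  provider: string;
  models: string[];
}

// An empty model list is a valid discovery outcome (e.g. a strict
// providerFilter matched nothing), so return it explicitly instead of null,
// which would trigger a fallback to the static catalog.
function toDiscoveryResult(models: string[]): DiscoveryResult {
  return { provider: "azure-foundry", models };
}
```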
@steipete I keep everything clean for merge, but by the time someone looks at it, it's already too late and merge conflicts appear. This project is amazing and I would like to support it with my Azure cloud expertise.
This pull request has been automatically marked as stale due to inactivity.
Closing this stale Microsoft-tracker item for cleanup. If this is still an issue or still worth pursuing, please re-open it. We now have dedicated Microsoft maintainers watching this area.
Summary
leaving users who rely on Azure's model hosting unable to use the platform.
Phi-4, Llama, Mistral, and Cohere models via a unified inference endpoint — a significant deployment target for enterprise users.
speech-to-text (STT via Whisper/transcribe models), text-to-speech (TTS), and dynamic model discovery from the /models API. Includes a 12-model built-in catalog, Zod config schema, provider auto-detection from AZURE_FOUNDRY_API_KEY, and query-parameter support for Azure's api-version requirement.
modifications to the gateway, agent orchestration, UI, or any non-Azure code paths. OpenAI audio provider received only a minor additive change (query-param passthrough support).
Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
User-visible / Behavior Changes
https://models.inference.ai.azure.com)
provider filtering and cache refresh interval
discovered and available
all resolve to azure-foundry
Security Impact (required)
model discovery
auth profiles). All network calls use fetchWithTimeoutGuarded with SSRF guards. STT/TTS use api-key header auth (Azure convention); chat uses Authorization: Bearer. Error responses are truncated to 300 chars to prevent leaking large payloads. No API keys are placed in URLs.
Repro + Verification
Environment
models:
  azureFoundryDiscovery:
    enabled: true
env: AZURE_FOUNDRY_API_KEY=
Steps
Expected
Actual
Evidence
Test results: 15 tests passing across 3 test suites:
handling)
handling)
Human Verification (required)
(0 errors), media/store tests pass (19 tests)
override vs default, api-key header auth for cognitive services endpoints, model
discovery cache expiration, provider filter matching
playback, full end-to-end gateway flow
Compatibility / Migration
AZURE_FOUNDRY_ENDPOINT) and config section (azureFoundryDiscovery). No existing config is affected.
Failure Recovery (if this breaks)
auto-detection will skip Azure Foundry entirely.
opt-in via API key.
returns empty array (silent degradation). Chat errors surface as Azure Foundry chat completion failed (HTTP xxx).
Risks and Mitigations
12-model catalog serves as fallback, and errors are logged via subsystem logger.
adopted early builds.
The model-selection.ts normalizer maps all old provider ID variants to azure-foundry.
Greptile Summary
This PR adds comprehensive Azure AI Foundry provider support to OpenClaw, including chat completions (LLM), speech-to-text (STT), and text-to-speech (TTS) capabilities. The implementation follows established patterns from existing providers.
Key Changes
- azure-foundry provider with chat completions support via an OpenAI-compatible endpoint
- /models API with caching and filtering
- api-key header auth
- Provider ID aliases: azure, azureai, azure-ai, azure-ai-foundry → azure-foundry
Notable Implementation Details
- Env var aliases: AZURE_OPENAI_*, AZURE_INFERENCE_*, AZURE_AI_*
- api-version configuration
Confidence Score: 4/5
Last reviewed commit: aa427e2