Skip to content

Feat/azure ai provider#25758

Closed
ghostrider0470 wants to merge 5 commits intoopenclaw:mainfrom
ghostrider0470:feat/azure-ai-provider
Closed

Feat/azure ai provider#25758
ghostrider0470 wants to merge 5 commits intoopenclaw:mainfrom
ghostrider0470:feat/azure-ai-provider

Conversation

@ghostrider0470
Copy link
Copy Markdown

@ghostrider0470 ghostrider0470 commented Feb 24, 2026

Summary

  • Problem: OpenClaw has no support for Azure AI Foundry / Azure AI Inference models,
    leaving users who rely on Azure's model hosting unable to use the platform.
  • Why it matters: Azure AI Foundry provides access to GPT-4o, o3/o4-mini, DeepSeek-R1,
    Phi-4, Llama, Mistral, and Cohere models via a unified inference endpoint — a significant
    deployment target for enterprise users.
  • What changed: Added a new azure-foundry provider supporting chat completions (LLM),
    speech-to-text (STT via Whisper/transcribe models), text-to-speech (TTS), and dynamic
    model discovery from the /models API. Includes 12-model built-in catalog, Zod config
    schema, provider auto-detection from AZURE_FOUNDRY_API_KEY, and query parameter support
    for Azure's api-version requirement.
  • What did NOT change (scope boundary): No changes to existing provider behavior. No
    modifications to the gateway, agent orchestration, UI, or any non-Azure code paths.
    OpenAI audio provider received only a minor additive change (query param passthrough
    support).

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

  • Closes #
  • Related #

User-visible / Behavior Changes

  • New provider azure-foundry available for chat, STT, and TTS
  • New env vars: AZURE_FOUNDRY_API_KEY, AZURE_FOUNDRY_ENDPOINT (defaults to
    https://models.inference.ai.azure.com)
  • New config section models.azureFoundryDiscovery for dynamic model listing with optional
    provider filtering and cache refresh interval
  • TTS config gains azure provider option with apiKey, endpoint, model, voice fields
  • Auto-detection: if AZURE_FOUNDRY_API_KEY is set, Azure Foundry models are automatically
    discovered and available
  • Provider aliases accepted: azure, azureai, azure-ai, azure-ai-foundry, azure-foundry
    all resolve to azure-foundry

Security Impact (required)

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? Yes — new API key env var AZURE_FOUNDRY_API_KEY
  • New/changed network calls? Yes — calls to Azure AI endpoints for chat, STT, TTS, and
    model discovery
  • Command/tool execution surface changed? No
  • Data access scope changed? No
  • Explanation: API keys are handled identically to existing providers (env var, config,
    auth profiles). All network calls use fetchWithTimeoutGuarded with SSRF guards. STT/TTS
    use api-key header auth (Azure convention); chat uses Authorization: Bearer. Error
    responses are truncated to 300 chars to prevent leaking large payloads. No API keys are
    placed in URLs.

Repro + Verification

Environment

  • OS: Any (tested on WSL2/Linux)
  • Runtime/container: Node.js v25+
  • Model/provider: Azure AI Foundry (models.inference.ai.azure.com)
  • Integration/channel: Any channel supporting LLM / audio
  • Relevant config:
    models:
    azureFoundryDiscovery:
    enabled: true

env: AZURE_FOUNDRY_API_KEY=

Steps

  1. Set AZURE_FOUNDRY_API_KEY env var with a valid Azure AI key
  2. Start OpenClaw — Azure Foundry models auto-discover
  3. Send a message to trigger LLM response via an Azure model (e.g., gpt-4o)
  4. Send a voice message to test STT transcription
  5. Configure TTS with provider: azure and test text-to-speech output

Expected

  • Models from Azure Foundry endpoint appear in model list
  • Chat completions work via /openai/v1/chat/completions
  • Audio transcription works via Azure's /openai/deployments/{model}/audio/transcriptions
  • TTS generates audio via Azure's speech endpoint

Actual

  • (To be verified with live Azure credentials)

Evidence

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording
  • Perf numbers (if relevant)

Test results: 15 tests passing across 3 test suites:

  • azure-foundry-discovery.test.ts — 8 tests (discovery, caching, filtering, error
    handling)
  • azure-foundry-models.test.ts — 4 tests (catalog validation, model builder)
  • azure/audio.test.ts — 3 tests (STT auth headers, URL construction, api-version
    handling)

Human Verification (required)

  • Verified scenarios: TypeScript compilation clean, all 15 Azure tests pass, lint passes
    (0 errors), media/store tests pass (19 tests)
  • Edge cases checked: Deployment-scoped vs bare endpoint URL construction, api-version
    override vs default, api-key header auth for cognitive services endpoints, model
    discovery cache expiration, provider filter matching
  • What you did NOT verify: Live Azure API calls (requires valid credentials), TTS audio
    playback, full end-to-end gateway flow

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? Yes — new optional env vars (AZURE_FOUNDRY_API_KEY,
    AZURE_FOUNDRY_ENDPOINT) and config section (azureFoundryDiscovery). No existing config is
    affected.
  • Migration needed? No

Failure Recovery (if this breaks)

  • How to disable/revert: Remove AZURE_FOUNDRY_API_KEY env var. The provider
    auto-detection will skip Azure Foundry entirely.
  • Files/config to restore: No config changes needed — the provider is purely additive and
    opt-in via API key.
  • Known bad symptoms: If Azure endpoint is unreachable, discovery logs one warning then
    returns empty array (silent degradation). Chat errors surface as Azure Foundry chat
    completion failed (HTTP xxx).

Risks and Mitigations

  • Risk: Azure API response format changes could break model discovery parsing.
    • Mitigation: Discovery returns empty array on failure (graceful degradation), built-in
      12-model catalog serves as fallback, and errors are logged via subsystem logger.
  • Risk: Env var rename from AZURE_AI_* to AZURE_FOUNDRY_* could break existing users who
    adopted early builds.
    • Mitigation: This is the initial PR — no users have the old env vars in production.
      The model-selection.ts normalizer maps all old provider ID variants to azure-foundry.

Greptile Summary

This PR adds comprehensive Azure AI Foundry provider support to OpenClaw, including chat completions (LLM), speech-to-text (STT), and text-to-speech (TTS) capabilities. The implementation follows established patterns from existing providers.

Key Changes

  • New azure-foundry provider with chat completions support via OpenAI-compatible endpoint
  • Dynamic model discovery from Azure AI Foundry /models API with caching and filtering
  • 12-model built-in catalog as fallback (GPT-4o, o3/o4-mini, DeepSeek-R1, Phi-4, Llama, Mistral, Cohere)
  • STT integration via Azure OpenAI Whisper endpoints with api-key header auth
  • TTS integration using Azure speech endpoints
  • Provider alias normalization (azure, azureai, azure-ai, azure-ai-foundryazure-foundry)
  • Comprehensive test coverage (15 tests across 3 test suites)

Notable Implementation Details

  • Env var aliases accepted for backward compatibility (AZURE_OPENAI_*, AZURE_INFERENCE_*, AZURE_AI_*)
  • Discovery includes cache with configurable refresh interval (default 1 hour)
  • Provider filter support for selective model discovery
  • Query parameter passthrough for api-version configuration
  • Error response truncation (300 chars) to prevent log pollution

Confidence Score: 4/5

  • This PR is safe to merge with minor observations - the implementation is well-tested and follows existing patterns
  • The code is well-structured with comprehensive test coverage and follows established OpenClaw patterns. The PR description claim about SSRF guards is inaccurate (raw fetch is used, not fetchWithSsrfGuard), but this aligns with OpenClaw's trust model where operators configure endpoints. One minor observation: the PR description states this is "initial PR" but references env var renames from AZURE_AI_* to AZURE_FOUNDRY_* - however, the code actually accepts both as aliases, which is correct. The implementation is additive and opt-in via API key.
  • No files require special attention - the implementation is consistent and well-tested

Last reviewed commit: aa427e2

(2/5) Greptile learns from your feedback when you react with thumbs up/down!

@openclaw-barnacle openclaw-barnacle bot added agents Agent runtime and tooling size: XL docs Improvements or additions to documentation labels Feb 24, 2026
@ghostrider0470 ghostrider0470 marked this pull request as ready for review February 24, 2026 20:28
@openclaw-barnacle
Copy link
Copy Markdown

This pull request has been automatically marked as stale due to inactivity.
Please add updates or it will be closed.

@openclaw-barnacle openclaw-barnacle bot added stale Marked as stale due to inactivity channel: discord Channel integration: discord labels Mar 3, 2026
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch from d832ac4 to d897c72 Compare March 9, 2026 23:02
@ghostrider0470 ghostrider0470 marked this pull request as draft March 9, 2026 23:04
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch from d897c72 to 9b0e1b6 Compare March 9, 2026 23:15
@openclaw-barnacle openclaw-barnacle bot removed the channel: discord Channel integration: discord label Mar 9, 2026
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch 2 times, most recently from 497403d to a5b3691 Compare March 9, 2026 23:21
@ghostrider0470 ghostrider0470 marked this pull request as ready for review March 9, 2026 23:28
@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps bot commented Mar 9, 2026

Greptile Summary

This PR adds a new azure-foundry provider to OpenClaw, supporting LLM chat completions, audio transcription (STT), and text-to-speech (TTS) via Azure AI Foundry / Azure OpenAI endpoints. It follows existing provider patterns (implicit discovery, auth-profile lookup, Zod config schema) and is fully additive — no existing code paths are affected. The implementation is generally solid, but there are a few issues to address before merging:

  • Runtime crash risk: src/media-understanding/providers/azure/audio.ts calls path.basename(params.fileName) as a fallback when params.fileName is falsy. If params.fileName is undefined, this throws a TypeError at runtime.
  • Missing timeout / SSRF protection in client.ts: azureFoundryChatCompletion uses bare fetch with no timeout and no SSRF guard. The PR description claims these protections are applied to all network calls, but this function is the exception. A slow or attacker-controlled endpoint could hang the process.
  • Debug console.warn calls in production hot paths: Several console.warn diagnostic statements were added in src/media-understanding/apply.ts and src/media-understanding/runner.ts that fire unconditionally on every media-understanding request. These should be removed or routed through the subsystem logger.
  • Duplicated type definition: AzureFoundryDiscoveryConfig is defined independently in both src/agents/azure-foundry-discovery.ts and src/config/types.models.ts — the discovery module should import the canonical type.

Confidence Score: 3/5

  • Safe to merge after fixing the path.basename(undefined) crash, the missing timeout in client.ts, and removing debug console.warn calls from production paths.
  • The PR is well-scoped, additive, and broadly follows existing provider patterns. However, there is a real runtime crash path (path.basename(undefined)) in the Azure STT provider, the chat client bypasses both SSRF protection and timeout guards (contradicting the PR's own security claims), and multiple debug console.warn statements will pollute production logs. These are not blocking architecture issues but do need to be fixed before production use.
  • src/providers/azure-foundry/client.ts (no timeout, no SSRF guard) and src/media-understanding/providers/azure/audio.ts (potential TypeError on undefined fileName).

Comments Outside Diff (4)

  1. src/media-understanding/providers/azure/audio.ts, line 1688 (link)

    path.basename called on potentially-undefined fileName

    When params.fileName?.trim() is falsy (either an empty string or undefined), the fallback calls path.basename(params.fileName). If params.fileName is undefined, this will throw a TypeError at runtime because path.basename requires a string argument — it does not accept undefined.

  2. src/providers/azure-foundry/client.ts, line 2118-2126 (link)

    Raw fetch used — no SSRF protection and no request timeout

    azureFoundryChatCompletion calls fetch directly. The PR description states "All network calls use fetchWithTimeoutGuarded with SSRF guards," but this function bypasses both. There is no timeout at all (an unresponsive Azure endpoint will hang the call indefinitely), and the SSRF guard that other providers apply to user-supplied endpoints is absent. The opts parameter already accepts maxTokens and temperature — consider adding a timeoutMs param and switching to the project's guarded fetch helper (or at minimum using an AbortController with a default timeout, as azureTTS in tts-core.ts does).

  3. src/media-understanding/apply.ts, line 1478-1482 (link)

    Debug console.warn calls left in production paths

    This console.warn is a diagnostic statement that will fire for every media-understanding request in production. The same pattern was added in multiple places in src/media-understanding/runner.ts (lines ~1847, ~1857, ~1865, ~1876, ~1884). Both files already import and use a subsystem logger (e.g. createSubsystemLogger); these statements should either be removed or converted to log.debug/logVerbose calls so they respect the configured verbosity level and do not pollute production console output.

  4. src/agents/azure-foundry-discovery.ts, line 573 (link)

    AzureFoundryDiscoveryConfig type duplicated across two modules

    AzureFoundryDiscoveryConfig is defined here (line 550–557) and again in src/config/types.models.ts (line 1364–1371) with identical fields. The discovery module should import the canonical type from types.models.ts to keep a single source of truth and avoid the two drifting over time.

    Then remove the local AzureFoundryDiscoveryConfig definition in this file.

Last reviewed commit: a5b3691

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a5b3691699

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/agents/models-config.providers.ts Outdated
Comment thread src/agents/models-config.providers.ts Outdated
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch 2 times, most recently from 32473ac to 400859e Compare March 9, 2026 23:37
@ghostrider0470
Copy link
Copy Markdown
Author

All four issues addressed in latest force-push:

  1. path.basename(undefined) crash — Removed the dead fallback; uses "audio" default directly.
  2. Missing timeout/SSRF in client.ts — Added AbortSignal.timeout(60s) and Azure host allowlist validation (assertAzureEndpoint).
  3. Debug console.warn in production paths — Replaced all 6 occurrences with logVerbose behind shouldLogVerbose() gate.
  4. Duplicate AzureFoundryDiscoveryConfig type — Discovery module now re-exports from config/types.models.ts.

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7a7a79644b

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/agents/models-config.providers.ts
Comment thread src/agents/azure-foundry-discovery.ts
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch 2 times, most recently from 3fd9eb7 to 408ce99 Compare March 9, 2026 23:52
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c1d8d0e372

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/media-understanding/providers/azure/audio.ts Outdated
Comment thread src/agents/azure-foundry-discovery.ts Outdated
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9fe4038aa9

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/agents/models-config.providers.ts Outdated
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch from 9fe4038 to 0138b7b Compare March 10, 2026 00:26
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0138b7bff2

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/agents/model-auth.ts Outdated
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch 3 times, most recently from af19638 to 8bb14bc Compare March 10, 2026 00:41
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8bb14bc93b

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/agents/models-config.providers.ts Outdated
@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch 4 times, most recently from fdd130c to d8ed5fb Compare March 10, 2026 00:57
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d8ed5fb136

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/config/zod-schema.core.ts Outdated
@@ -409,6 +423,16 @@ export const TtsConfigSchema = z
})
.strict()
.optional(),
azure: z
.object({
apiKey: z.string().optional().register(sensitive),
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Allow SecretInput for Azure TTS API keys

The new Azure TTS config only accepts apiKey as a plain string here, while other TTS providers use SecretInputSchema; this means object-style secret refs (for example env/file/exec secret refs) are rejected for messages.tts.azure.apiKey even though the rest of TTS supports them. In setups that keep provider keys in secret providers instead of plaintext config/env, Azure TTS cannot be configured and falls back or fails unexpectedly.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Fixed — changed apiKey from z.string() to SecretInputSchema so object-style secret refs (env/file/exec) are accepted for Azure TTS, consistent with other TTS providers.

Comment on lines +680 to +681
const openaiBaseUrl = `${trimmedEndpoint}/openai/v1`;
const anthropicBaseUrl = `${trimmedEndpoint}/anthropic`;
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Normalize Azure endpoint before composing OpenAI model URLs

This composes openaiBaseUrl by unconditionally appending /openai/v1, so if the configured endpoint already uses an OpenAI-compatible suffix (for example https://.../openai/v1), model base URLs become .../openai/v1/openai/v1 and chat requests 404. The same endpoint pattern is accepted elsewhere (e.g. Azure audio URL normalization), so this path should strip /openai/v1//v1 before building model URLs.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Fixed — buildAzureFoundryProvider now strips /openai/v1 and /v\d+ suffixes before composing model URLs, matching the same normalization applied in buildTranscriptionUrl for audio.

@ghostrider0470 ghostrider0470 force-pushed the feat/azure-ai-provider branch from d8ed5fb to 1f4452d Compare March 10, 2026 07:39
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 1f4452d642

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/tts/tts.ts Outdated
voice: raw.openai?.voice ?? DEFAULT_OPENAI_VOICE,
},
azure: {
apiKey: raw.azure?.apiKey,
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 Badge Resolve Azure TTS secret refs before using apiKey

messages.tts.azure.apiKey is validated as SecretInput, but this assignment keeps the raw value instead of resolving it like the OpenAI/ElevenLabs branches do. If users configure an env/file/exec secret ref object, resolveTtsApiKey returns that object and azureTTS sends an api-key header with a non-secret string representation (for example [object Object]), which causes Azure auth to fail for that configuration path.

Useful? React with 👍 / 👎.

Comment thread src/tts/tts-core.ts
Comment on lines +679 to +681
const baseUrl = endpoint.replace(/\/+$/, "");
const encodedVersion = encodeURIComponent(apiVersion);
const url = `${baseUrl}/openai/deployments/${encodeURIComponent(model)}/audio/speech?api-version=${encodedVersion}`;
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Normalize Azure TTS endpoint before building deployment URL

This URL builder only strips trailing slashes before appending /openai/deployments/..., so an OpenAI-compatible endpoint like https://.../openai/v1 produces .../openai/v1/openai/deployments/... and 404s. The same suffix-normalization concern is already handled in other Azure paths, so this path should also strip /openai/v1 (or /v1) before composing the deployment URL.

Useful? React with 👍 / 👎.

Comment on lines +911 to +914
const endpoint =
discoveryConfig?.endpoint?.trim() ||
resolveAzureFoundryEndpointEnv(env)?.value ||
"https://models.inference.ai.azure.com";
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Strip OpenAI suffix before Azure discovery endpoint reuse

Fresh evidence: endpoint normalization was added for the static Azure provider builder, but the discovery resolver still reuses the raw endpoint. If azureFoundryDiscovery.endpoint (or env alias) is set to an OpenAI-compatible URL ending in /openai/v1, discovery requests hit .../openai/v1/models and discovered model defaults are later built from the same suffixed base, so discovery can silently return no models or generate broken model base URLs.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 791796a2ab

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/config/types.tts.ts Outdated
};
/** Azure AI Foundry TTS configuration. */
azure?: {
apiKey?: string;
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Use SecretInput for Azure TTS apiKey typing

messages.tts.azure.apiKey is typed as string here, but the runtime schema now accepts SecretInput (env/file/exec refs), so TypeScript config users cannot supply object-style secret refs without type errors even though runtime validation supports them. This creates a contract mismatch between OpenClawConfig typing and actual config behavior, and it blocks non-plaintext Azure TTS credentials in typed configs.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 09cdaba6b4

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread src/tts/tts.ts
return config.openai.apiKey || process.env.OPENAI_API_KEY;
}
if (provider === "azure") {
return config.azure.apiKey || resolveAzureFoundryApiKeyEnv(process.env)?.value;
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Normalize Azure TTS apiKey before env fallback

resolveTtsApiKey returns config.azure.apiKey verbatim, so a whitespace-only configured value (for example from an empty template substitution) is treated as a valid key and prevents fallback to AZURE_FOUNDRY_API_KEY/aliases. In that scenario Azure TTS requests are sent with an invalid api-key header and fail authentication even though a valid env key is present. This branch should trim/normalize the configured key before deciding whether to use env fallback.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b1b4925347

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +179 to +183
const response = await fetchFn(url, {
method: "GET",
headers: {
"api-key": params.apiKey,
Accept: "application/json",
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Add request timeout to Azure discovery fetch

discoverAzureFoundryModels issues the /models call without any timeout or abort signal, so a stalled Azure endpoint (for example, dropped packets or a hanging proxy) can block implicit provider resolution for an unbounded time during startup/model-config generation. Other discovery paths in this repo use bounded AbortSignal.timeout(...), so this path should also enforce a timeout to avoid long hangs.

Useful? React with 👍 / 👎.

Comment on lines +942 to +943
if (models.length === 0) {
return null;
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Preserve empty Azure discovery results

Returning null when discovery yields zero models causes the caller to fall back to the preloaded static Azure catalog, which means an explicit providerFilter (or any valid zero-result discovery state) is silently ignored and unavailable models are reintroduced. This should return an explicit Azure provider result (with the discovered model list, even if empty) so discovery output is respected.

Useful? React with 👍 / 👎.

@ghostrider0470
Copy link
Copy Markdown
Author

@steipete I make everything clean for merge but by the time someone looks at it, its already too late merge conflicts appear. This project is amazing and I would like to support it with my azure cloud expertise.

@openclaw-barnacle openclaw-barnacle bot removed the stale Marked as stale due to inactivity label Mar 23, 2026
@openclaw-barnacle
Copy link
Copy Markdown

This pull request has been automatically marked as stale due to inactivity.
Please add updates or it will be closed.

@openclaw-barnacle openclaw-barnacle bot added the stale Marked as stale due to inactivity label Mar 29, 2026
@BradGroux
Copy link
Copy Markdown
Member

Closing this stale Microsoft-tracker item for cleanup. If this is still an issue or still worth pursuing, please re-open it. We now have dedicated Microsoft maintainers watching this area.

@BradGroux BradGroux closed this Mar 31, 2026
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

agents Agent runtime and tooling docs Improvements or additions to documentation size: XL stale Marked as stale due to inactivity

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants