
feat(tts): add Azure Speech TTS provider#51776

Open
leonchui wants to merge 4 commits into openclaw:main from leonchui:feature/azure-tts-clean

Conversation


@leonchui leonchui commented Mar 21, 2026

Summary

Add Azure Speech TTS provider to OpenClaw with SSML synthesis support.

Problem

  • OpenClaw currently supports Edge TTS, ElevenLabs, and OpenAI TTS
  • Azure Speech has 400+ neural voices including Cantonese (zh-HK)
  • Many users already have Azure accounts

What Changed

  • Added azure.ts provider and azure.test.ts tests
  • Updated tts.ts, tts-core.ts, provider-registry.ts
  • Updated config types and zod schema

Features

  • SSML-based synthesis, 400+ neural voices
  • Cantonese voice: zh-HK-HiuMaanNeural
  • Config: apiKey, region, voice, lang, outputFormat
  • Environment: AZURE_SPEECH_API_KEY, AZURE_SPEECH_REGION
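For reference, a minimal config sketch wiring these fields together — the field names follow the bullets above, but the surrounding object shape is an assumption, not OpenClaw's actual schema:

```typescript
// Hypothetical OpenClaw TTS config fragment for the Azure provider.
// Field names mirror the bullets above; the overall shape is assumed.
const ttsConfig = {
  provider: "azure",
  azure: {
    // apiKey/region are typically resolved from AZURE_SPEECH_API_KEY /
    // AZURE_SPEECH_REGION; literal placeholders are shown here.
    apiKey: "<your-azure-speech-key>",
    region: "eastasia",
    voice: "zh-HK-HiuMaanNeural", // Cantonese neural voice
    lang: "zh-HK",
    outputFormat: "audio-24khz-48kbitrate-mono-mp3",
  },
};
```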

Related Issues

- Add Azure TTS provider with SSML synthesis
- Support for 400+ neural voices including Cantonese (zh-HK-HiuMaanNeural)
- Config: apiKey, region, voice, lang, outputFormat
- Environment variables: AZURE_SPEECH_API_KEY, AZURE_SPEECH_REGION
- Provider ID: 'azure' with alias 'azure-tts'
- Added azure to TTS_PROVIDERS and auto-selection
- Added azure_voice directive support in parseTtsDirectives
- Added tests for Azure TTS voice listing
- Fixed file extension mapping for non-MP3 formats
- Resolves issue openclaw#4021

greptile-apps bot commented Mar 21, 2026

Greptile Summary

This PR adds Azure Speech as a new TTS provider, supporting 400+ neural voices via the Azure Cognitive Services REST API with SSML synthesis, region/baseUrl configuration, and a new azure_voice directive.

Key issues found:

  • SSML injection (P1): buildAzureSSML escapes the text body but interpolates voice and lang directly into the XML template without escaping. Since voice is user-controllable via the azure_voice directive (validated only as non-empty), a crafted voice value like foo' evil='injected can inject arbitrary SSML attributes or elements into the outbound request.
  • Incorrect directive in error message (P1): When synthesis fails because no voice is configured, the error tells the user to use [[tts:voice=…]] — the OpenAI voice directive — rather than the correct Azure directive [[tts:azure_voice=…]].
  • Auto-selection / isConfigured mismatch (P1): getTtsProvider auto-selects azure as soon as AZURE_SPEECH_API_KEY is present, but the provider's isConfigured also requires a voice or lang. A user who sets only the API key will have azure silently auto-selected and then hit a hard synthesis failure.
  • No timeout on voice-listing fetch (P2): listAzureVoices has no AbortSignal.timeout, unlike the synthesize path, leaving voice-list requests unbounded.
  • Duplicate constant (P2): DEFAULT_AZURE_OUTPUT_FORMAT is defined identically in both azure.ts and tts.ts.

Confidence Score: 2/5

  • Not safe to merge as-is — the SSML injection and the misleading error + auto-selection mismatch need to be resolved first.
  • Two P1 functional/security issues in the core synthesis path: unescaped voice/lang in the SSML template enables injection attacks via directives, and the auto-selection logic picks azure based on API key alone while synthesis immediately fails without a configured voice. The misdirected error message compounds the UX problem. These are straightforward to fix but need to land before this provider is usable in production.
  • src/tts/providers/azure.ts (SSML escaping, isConfigured/error message); src/tts/tts.ts (auto-selection gate)

Last reviewed commit: "feat(tts): add Azure..."

Code context (buildAzureSSML, src/tts/providers/azure.ts):

.replace(/"/g, "&quot;")
.replace(/'/g, "&apos;");

return `<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='${lang || "en-US"}'><voice name='${voice}'>${escapedText}</voice></speak>`;

P1 SSML injection via unescaped voice and lang

buildAzureSSML escapes the user-provided text body correctly, but both voice and lang are interpolated directly into the XML template without any escaping.

The voice parameter is populated from the azure_voice directive override (overrides.azure.voice), which accepts any non-empty string. An attacker who can influence a TTS directive (e.g. via message content reaching parseTtsDirectives) could inject arbitrary SSML attributes or elements:

  • Input: azure_voice=foo' xml:lang='evil
  • Resulting SSML: <voice name='foo' xml:lang='evil'>...</voice>

Similarly lang (which uses single-quote delimiters in the xml:lang attribute) would be broken by any value containing a single quote.

At minimum both values should be single-quote-escaped before insertion; ideally a proper XML attribute escaper should be applied:

Suggested change
return `<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='${lang || "en-US"}'><voice name='${voice}'>${escapedText}</voice></speak>`;
return `<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='${escapeXmlAttr(lang || "en-US")}'><voice name='${escapeXmlAttr(voice)}'>${escapedText}</voice></speak>`;

Where escapeXmlAttr replaces at least &, <, >, ", and ' (i.e. the same set applied to escapedText).
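A minimal sketch of such an escaper — the `escapeXmlAttr` name comes from the suggestion above; everything else is illustrative:

```typescript
// Escapes the five XML-special characters so a value is safe inside a
// single- or double-quoted attribute.
function escapeXmlAttr(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

// The crafted directive value from the report can no longer break out of
// the attribute: its single quotes are neutralized to &apos;.
const voice = "foo' xml:lang='evil";
const safe = `<voice name='${escapeXmlAttr(voice)}'>hi</voice>`;
```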


Comment on lines +99 to +103
isConfigured: ({ config }) =>
Boolean(
((config as any)?.azure?.apiKey || process.env.AZURE_SPEECH_API_KEY) &&
((config as any)?.azure?.voice || (config as any)?.azure?.lang),
),

P1 isConfigured check diverges from auto-selection logic in getTtsProvider

isConfigured returns false when no voice or lang is configured (API key alone is not enough). However, getTtsProvider in tts.ts auto-selects azure as soon as resolveTtsApiKey finds an AZURE_SPEECH_API_KEY — it does not consult isConfigured.

The practical result: a user who sets only AZURE_SPEECH_API_KEY (no voice) will have azure auto-selected, which then hard-fails at synthesize time with:

Azure voice not configured. Set voice in config or use [[tts:voice=…]] directive

The error message itself references [[tts:voice=…]] (the OpenAI voice directive) rather than the Azure-specific [[tts:azure_voice=…]], adding to the confusion.

Consider either:

  1. Aligning getTtsProvider to also require a configured voice before auto-selecting azure, or
  2. Updating the error message to reference the correct directive:
Suggested change
isConfigured: ({ config }) =>
Boolean(
((config as any)?.azure?.apiKey || process.env.AZURE_SPEECH_API_KEY) &&
((config as any)?.azure?.voice || (config as any)?.azure?.lang),
),
"Azure voice not configured. Set voice in config or use [[tts:azure_voice=zh-HK-HiuMaanNeural]] directive",
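Option 1 can be sketched as follows — `TtsProvider` and `autoSelectProvider` are illustrative stand-ins, not the actual shapes in tts.ts:

```typescript
// Illustrative provider shape; only the fields needed for the sketch.
interface TtsProvider {
  id: string;
  isConfigured: (ctx: { config: unknown }) => boolean;
}

// Auto-select only providers that declare themselves fully configured,
// rather than keying off the mere presence of an API key.
function autoSelectProvider(
  providers: TtsProvider[],
  config: unknown,
): TtsProvider | undefined {
  return providers.find((p) => p.isConfigured({ config }));
}
```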

import type { SpeechProviderPlugin } from "../../plugins/types.js";
import type { SpeechVoiceOption } from "../provider-types.js";

const DEFAULT_AZURE_OUTPUT_FORMAT = "audio-24khz-48kbitrate-mono-mp3";

P2 Duplicate constant across modules

DEFAULT_AZURE_OUTPUT_FORMAT is defined identically in both src/tts/providers/azure.ts (line 4) and src/tts/tts.ts. If the default ever changes it must be updated in two places. Consider exporting it from one location (e.g. azure.ts) and importing it in tts.ts.
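The fix is mechanical — a sketch, with file locations taken from the comment:

```typescript
// src/tts/providers/azure.ts — keep the single definition here and export it:
export const DEFAULT_AZURE_OUTPUT_FORMAT = "audio-24khz-48kbitrate-mono-mp3";

// src/tts/tts.ts — replace the duplicate definition with an import:
// import { DEFAULT_AZURE_OUTPUT_FORMAT } from "./providers/azure.js";
```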


Comment on lines +45 to +49
const response = await fetch(url, {
headers: {
"Ocp-Apim-Subscription-Key": params.apiKey,
},
});

P2 No request timeout on listAzureVoices

The synthesize path correctly uses AbortSignal.timeout(timeoutMs), but the fetch call inside listAzureVoices has no timeout. A slow or unresponsive Azure endpoint could stall a voice-listing request indefinitely. Consider passing a timeout signal here as well:

Suggested change
const response = await fetch(url, {
headers: {
"Ocp-Apim-Subscription-Key": params.apiKey,
},
});
const response = await fetch(url, {
headers: {
"Ocp-Apim-Subscription-Key": params.apiKey,
},
signal: AbortSignal.timeout(params.timeoutMs ?? DEFAULT_TIMEOUT_MS),
});

You would need to add an optional timeoutMs field to the params type accordingly.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 996c529913

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +99 to +103
isConfigured: ({ config }) =>
Boolean(
((config as any)?.azure?.apiKey || process.env.AZURE_SPEECH_API_KEY) &&
((config as any)?.azure?.voice || (config as any)?.azure?.lang),
),

P1 Require a configured Azure voice before advertising readiness

isConfigured() currently checks azure.voice || azure.lang, but resolveTtsConfig() always fills config.azure.lang with "en-US" (src/tts/tts.ts:345-355). In practice, any host with only AZURE_SPEECH_API_KEY set is now reported as Azure-ready, getTtsProvider() can auto-pick Azure as the primary provider (src/tts/tts.ts:503-510), and the first synthesis then fails with Azure voice not configured. That adds a guaranteed failure to every fallback path and hard-fails disableFallback callers until a voice is explicitly configured.


Comment on lines +93 to +96
return listAzureVoices({
apiKey,
region: (req.config as any)?.azure?.region || process.env.AZURE_SPEECH_REGION,
baseUrl: (req.config as any)?.azure?.baseUrl,

P2 Thread req.baseUrl through Azure voice listing

listSpeechVoices() passes a caller-supplied baseUrl into every provider (src/tts/tts.ts:848-852), but this Azure adapter ignores it and only forwards config.azure.baseUrl. Any setup that uses a custom Azure endpoint (private link, sovereign cloud, proxy, etc.) can still synthesize with the custom URL, yet runtime.tts.listVoices({ baseUrl }) will query the default public endpoint instead and fail or return the wrong catalog.
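A hedged sketch of the fix: prefer the caller-supplied baseUrl and fall back to the provider config. The request/config shapes here are assumptions:

```typescript
// Illustrative request shape; only the fields needed for the sketch.
interface VoiceListRequest {
  baseUrl?: string;
  config?: { azure?: { region?: string; baseUrl?: string } };
}

// Prefer the caller-supplied baseUrl (private link, sovereign cloud,
// proxy, etc.), then the provider-specific config value.
function resolveAzureBaseUrl(req: VoiceListRequest): string | undefined {
  return req.baseUrl ?? req.config?.azure?.baseUrl;
}
```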


Comment on lines +133 to +134
// Use timeout from config, directive, or default
const timeoutMs = (req.config as any)?.azure?.timeoutMs ?? DEFAULT_TIMEOUT_MS;

P2 Honor the global TTS timeout for Azure requests

The top-level messages.tts.timeoutMs is the generic request timeout (src/config/types.tts.ts:110-111), and the existing providers all respect req.config.timeoutMs. This Azure implementation skips that fallback and jumps straight to DEFAULT_TIMEOUT_MS, so deployments that shorten the global timeout to keep auto-replies responsive will still wait ~30s on Azure unless they discover and duplicate a provider-specific override.
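The intended fallback chain can be sketched as follows — field names come from the comment; the config shapes are assumptions:

```typescript
const DEFAULT_TIMEOUT_MS = 30_000;

// Illustrative config shape for the sketch.
interface TtsRequestConfig {
  timeoutMs?: number; // global messages.tts.timeoutMs
  azure?: { timeoutMs?: number }; // provider-specific override
}

// Provider override wins, then the global TTS timeout, then the default.
function resolveAzureTimeout(config?: TtsRequestConfig): number {
  return config?.azure?.timeoutMs ?? config?.timeoutMs ?? DEFAULT_TIMEOUT_MS;
}
```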


@leonchui
Author

Hi @steipete! 👋

I've submitted this PR adding Azure Speech TTS provider with support for 400+ neural voices including Cantonese (zh-HK-HiuMaanNeural).

This resolves issue #4021. Would you have time to review? Happy to make any adjustments if needed.

Thank you for your time! 🙏


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7a67b05782


audioBuffer: Buffer.from(audioBuffer),
outputFormat,
fileExtension: getFileExtension(outputFormat),
voiceCompatible: true,

P2 Respect target before marking Azure output voice-compatible

This provider always returns voiceCompatible: true and never branches on req.target, while its default output format is MP3. In voice-bubble channels, maybeApplyTtsToPayload uses this flag to force audioAsVoice, so Azure can be treated as voice-note-ready even when the synthesized format was not chosen for that path. This is inconsistent with the other providers (which gate voice-note compatibility on target/format) and can cause voice-message routing with incompatible audio.


if (outputFormat.includes("wav")) return ".wav";
if (outputFormat.includes("ogg")) return ".ogg";
if (outputFormat.includes("webm")) return ".webm";
return ".mp3"; // default to mp3

P2 Map Azure RIFF/PCM formats to non-MP3 file extensions

getFileExtension falls back to .mp3 for any format string that does not contain mp3, wav, ogg, or webm. Azure output formats like riff-24khz-16bit-mono-pcm and raw-16khz-16bit-mono-pcm therefore get written with an MP3 extension, which mismatches the actual bytes and leads downstream MIME inference/upload handling to be incorrect for those configured formats.
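One hedged way to extend the mapping, with the riff/raw cases added per this comment — mapping raw PCM output to .pcm is an assumption, not something the PR specifies:

```typescript
// Map an Azure output-format string to a file extension. The riff/raw
// branches handle formats like "riff-24khz-16bit-mono-pcm" and
// "raw-16khz-16bit-mono-pcm" before the generic substring checks.
function getFileExtension(outputFormat: string): string {
  if (outputFormat.includes("riff")) return ".wav"; // RIFF container = WAV
  if (outputFormat.includes("raw")) return ".pcm"; // headerless PCM (assumed)
  if (outputFormat.includes("wav")) return ".wav";
  if (outputFormat.includes("ogg")) return ".ogg";
  if (outputFormat.includes("webm")) return ".webm";
  return ".mp3"; // Azure's mp3 formats and the provider default
}
```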


@leonchui
Author

Hi @steipete! All conflicts have been resolved and the PR is now mergeable.

The security fix for SSML injection is included. Would you be able to merge this PR?

Thank you! 🙏


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 78c55dd089


Comment on lines +10 to +14
const BUILTIN_SPEECH_PROVIDER_BUILDERS = [
buildOpenAISpeechProvider,
buildElevenLabsSpeechProvider,
buildMicrosoftSpeechProvider,
buildAzureSpeechProvider,

P1 Register builtin speech providers before returning registry

The new BUILTIN_SPEECH_PROVIDER_BUILDERS list is never used, and buildProviderMaps still only registers providers returned by resolveSpeechProviderPluginEntries. Since there is no bundled plugin registering an azure speech provider, this commit’s Azure implementation remains unreachable (getSpeechProvider("azure") stays undefined), so provider selection and fallbacks can never actually use Azure.
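Schematically, the fix is to actually consume the builder list when building the provider maps. Except for BUILTIN_SPEECH_PROVIDER_BUILDERS and buildAzureSpeechProvider, every name below is an illustrative stand-in:

```typescript
// Minimal stand-in types for the sketch.
type SpeechProvider = { id: string };
type ProviderBuilder = () => SpeechProvider;

const buildAzureSpeechProvider: ProviderBuilder = () => ({ id: "azure" });
const BUILTIN_SPEECH_PROVIDER_BUILDERS: ProviderBuilder[] = [
  buildAzureSpeechProvider,
];

// buildProviderMaps must register the builtin builders (not only providers
// coming from plugin entries), or the azure provider stays unreachable.
function buildProviderMaps(
  builders: ProviderBuilder[],
): Map<string, SpeechProvider> {
  const registry = new Map<string, SpeechProvider>();
  for (const build of builders) {
    const provider = build();
    registry.set(provider.id, provider);
  }
  return registry;
}
```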


Comment on lines +420 to +424
elevenlabs: z
.object({
apiKey: SecretInputSchema.optional().register(sensitive),
baseUrl: z.string().optional(),
voiceId: z.string().optional(),

P1 Preserve messages.tts.providers in TTS config schema

This schema now hard-codes provider keys and drops the providers map while remaining strict, so existing messages.tts.providers.<id> configurations are rejected during config parsing. That regresses the migrated/documented shape used by plugin speech providers and also conflicts with buildTalkTtsConfig (which still writes tts.providers), causing valid TTS setups to fail validation or lose provider-specific settings.


Comment on lines +750 to +754
text: params.text,
cfg: params.cfg,
config,
target,
overrides: params.overrides,

P1 Pass providerConfig/providerOverrides to speech providers

The synthesize request is built with legacy fields (config, overrides) instead of the current speech-provider contract (providerConfig, providerOverrides, timeoutMs). Registered providers (for example in extensions/openai/speech-provider.ts) read the newer fields, so this call path delivers undefined config/overrides and can throw or mark providers as unconfigured even when TTS is configured correctly.


@leonchui
Author

Hi @steipete! The CI is failing because my PR assumes a different codebase structure than what exists on main.

The Issue:
My PR adds provider files (azure.ts, elevenlabs.ts, microsoft.ts, openai.ts) in a dedicated providers folder, but main has no such folder. It appears to use a plugin system for TTS providers instead.

Question:
Could you please provide guidance on the correct architecture for adding a new TTS provider like Azure Speech? Is there documentation or an example I can follow?

I'd like to contribute this Azure TTS provider properly but need help understanding the expected structure.

Thank you!


Development

Successfully merging this pull request may close these issues:

feat: Comprehensive Azure Provider Support Roadmap