# Pipecat Murf TTS

Official Murf AI Text-to-Speech integration for Pipecat - a framework for building voice and multimodal conversational AI applications.
Note: This integration is maintained by Murf AI. As the official provider of the TTS service, we are committed to actively maintaining and updating this integration.
This integration has been tested with Pipecat v0.0.106. For compatibility with other versions, please refer to the Pipecat changelog.
- **High-Quality Voice Synthesis**: Leverage Murf's advanced TTS technology
- **Real-time Streaming**: WebSocket-based streaming for low-latency audio generation
- **Voice Customization**: Control voice style, rate, pitch, and variation
- **Multi-Language Support**: Support for multiple languages and locales
- **Flexible Configuration**: Comprehensive audio format and quality options
- **Metrics Support**: Built-in performance tracking and monitoring
Install from PyPI with pip:

```bash
pip install pipecat-murf-tts
```

Or with uv:

```bash
uv add pipecat-murf-tts
```

Or install from source:

```bash
git clone https://github.com/murf-ai/pipecat-murf-tts.git
cd pipecat-murf-tts
pip install -e .
```

Sign up at Murf AI and obtain your API key from the dashboard.
```python
import asyncio

from pipecat_murf_tts import MurfTTSService


async def main():
    # Initialize the TTS service
    tts = MurfTTSService(
        api_key="your-murf-api-key",
        params=MurfTTSService.InputParams(
            voice_id="Matthew",
            style="Conversational",
            rate=0,
            pitch=0,
            sample_rate=44100,
            format="PCM",
        ),
    )

    # Use in your Pipecat pipeline
    # ... (see examples below)


if __name__ == "__main__":
    asyncio.run(main())
```

A complete pipeline example with an LLM:

```python
import asyncio
import os

from dotenv import load_dotenv
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask, PipelineParams
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.processors.aggregators.llm_context import LLMContext
from pipecat.processors.aggregators.llm_response_universal import (
    LLMContextAggregatorPair,
)

from pipecat_murf_tts import MurfTTSService

load_dotenv()


async def main():
    # Initialize Murf TTS
    tts = MurfTTSService(
        api_key=os.getenv("MURF_API_KEY"),
        params=MurfTTSService.InputParams(
            voice_id="Matthew",
            style="Conversational",
        ),
    )

    # Initialize LLM
    llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))

    # Set up context and pipeline
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
    ]
    context = LLMContext(messages)
    context_aggregator = LLMContextAggregatorPair(context)

    # Create pipeline
    pipeline = Pipeline([
        context_aggregator.user(),
        llm,
        tts,
        context_aggregator.assistant(),
    ])

    # Run pipeline
    task = PipelineTask(pipeline)
    runner = PipelineRunner()
    await runner.run(task)


if __name__ == "__main__":
    asyncio.run(main())
```

The `MurfTTSService.InputParams` class provides extensive configuration options:
| Parameter | Type | Default | Range/Options | Description |
|---|---|---|---|---|
| `voice_id` | `str` | `"Matthew"` | Any valid Murf voice ID | Voice identifier for TTS synthesis |
| `style` | `str` | `"Conversational"` | Voice-specific styles | Voice style (e.g., "Conversational", "Narration") |
| `rate` | `int` | `0` | -50 to 50 | Speech rate adjustment |
| `pitch` | `int` | `0` | -50 to 50 | Pitch adjustment |
| `variation` | `int` | `1` | 0 to 5 | Variation in pause, pitch, and speed (Gen2 only) |
| `model` | `str` | `"FALCON"` | `"FALCON"`, `"GEN2"` | The model to use for audio output |
| `sample_rate` | `int` | `44100` | 8000, 16000, 24000, 44100, 48000 | Audio sample rate in Hz |
| `channel_type` | `str` | `"MONO"` | `"MONO"`, `"STEREO"` | Audio channel configuration |
| `format` | `str` | `"PCM"` | `"MP3"`, `"WAV"`, `"FLAC"`, `"ALAW"`, `"ULAW"`, `"PCM"`, `"OGG"` | Audio output format |
| `multi_native_locale` | `str` | `None` | Language codes (e.g., `"en-US"`) | Language for Gen2 model audio |
| `pronunciation_dictionary` | `dict` | `None` | Custom pronunciation mappings | Dictionary for custom word pronunciations |
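If you build `rate` or `pitch` values programmatically, it can help to enforce the documented -50 to 50 range on the client side before constructing `InputParams`. A minimal clamp helper (illustrative only, not part of the package; the API may also validate server-side):

```python
def clamp(value: int, lo: int = -50, hi: int = 50) -> int:
    """Clamp a rate/pitch adjustment into the documented -50..50 range."""
    return max(lo, min(hi, value))


print(clamp(75))    # out-of-range value is pulled back to 50
print(clamp(-10))   # in-range value passes through unchanged
```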
```python
from pipecat_murf_tts import MurfTTSService

tts = MurfTTSService(
    api_key="your-api-key",
    params=MurfTTSService.InputParams(
        voice_id="en-US-natalie",
        style="Narration",
        rate=10,                      # Slightly faster
        pitch=-5,                     # Slightly lower pitch
        variation=3,                  # More variation
        sample_rate=48000,            # Higher quality
        channel_type="STEREO",
        format="WAV",
        multi_native_locale="en-US",
        pronunciation_dictionary={
            "Pipecat": {"pronunciation": "pipe-cat"},
        },
    ),
)
```

Murf AI offers a wide variety of voices across different languages and styles. Visit the Murf AI Voice Library to explore available voices.

Common voice IDs include:

- `en-US-natalie` - American English, female
- `en-UK-ruby` - British English, female
- `en-US-amara` - American English, female
- And many more...
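A typo in a voice ID typically only surfaces as an API error at synthesis time, so a local allowlist check can fail earlier. A sketch (the `KNOWN_VOICE_IDS` set here is illustrative and deliberately incomplete; the authoritative list lives in the Murf Voice Library):

```python
# Illustrative allowlist; extend it with the voice IDs your app actually uses.
KNOWN_VOICE_IDS = {"Matthew", "en-US-natalie", "en-UK-ruby", "en-US-amara"}


def check_voice_id(voice_id: str) -> str:
    """Return voice_id unchanged if it is in the local allowlist; raise otherwise."""
    if voice_id not in KNOWN_VOICE_IDS:
        raise ValueError(f"Unknown Murf voice ID: {voice_id!r}")
    return voice_id
```

For example, `check_voice_id("en-US-natalie")` passes, while a misspelled ID raises `ValueError` before any network call is made.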
Create a `.env` file in your project root:

```
MURF_API_KEY=your_murf_api_key_here
OPENAI_API_KEY=your_openai_key_here       # If using with LLM
DEEPGRAM_API_KEY=your_deepgram_key_here   # If using with STT
```

Check out the examples directory for complete working examples:
- murf_tts_basic.py - Full pipeline with STT, LLM, and TTS
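Since the examples read their credentials from `.env`, a missing key otherwise surfaces only as a confusing downstream auth failure. A small fail-fast helper (illustrative; `require_env` is not part of the package):

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, or fail with a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Example (assuming load_dotenv() has already run):
# api_key = require_env("MURF_API_KEY")
```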
To run the example:

```bash
# Install example dependencies (quote the extras so the shell
# does not interpret the brackets)
uv add "pipecat-ai[deepgram,openai,silero]"

# Set up your .env file with API keys, then run:
python examples/foundational/murf_tts_basic.py
```

You can also change the voice dynamically at runtime:

```python
# Change voice on the fly (async in pipecat >= 0.0.106)
await tts.set_voice("en-US-natalie")
```

The service includes built-in error handling and automatic reconnection:
```python
tts = MurfTTSService(
    api_key="your-api-key",
    params=MurfTTSService.InputParams(voice_id="Matthew"),
)

# Automatic reconnection on connection loss
# Built-in context management for interruptions
```

Requirements:

- Python >= 3.10, < 3.13
- pipecat-ai >= 0.0.106, <= 0.1.0
- websockets >= 15.0.1, < 16.0
- loguru >= 0.7.3
- python-dotenv >= 1.1.1
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Email: [email protected]
- Website: murf.ai
- Documentation: Murf API Documentation
- Issues: GitHub Issues