This directory contains examples showing how to build voice and multimodal agents with Pipecat.
- Follow the README steps to get your local environment configured.
- **Run from the root directory**: make sure you run the steps below from the root of this repository.
- **Using local audio?** The `LocalAudioTransport` requires the `portaudio` system dependency; install it before using the transport.
- Copy the `env.example` file and add API keys for the services you plan to use:

  ```shell
  cp env.example .env
  # Edit .env with your API keys
  ```

- Run any example:

  ```shell
  uv run python getting-started/01-say-one-thing.py
  ```

- Open the web interface at http://localhost:7860/client/ and click "Connect".
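The examples read those API keys from the environment at startup (typically via a dotenv helper). As a rough, stdlib-only sketch of what that `.env` parsing amounts to — the function name and behavior here are illustrative, not the examples' actual loader:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader sketch: put KEY=VALUE lines into os.environ.

    Illustrative only -- the real examples use a proper dotenv library,
    which also handles quoting, export prefixes, and multiline values.
    """
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            # Skip blanks, comments, and lines without a KEY=VALUE shape.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the environment.
            os.environ.setdefault(key.strip(), value.strip())
```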
Most examples support running with other transports, like Twilio or Daily.
**Daily**: Create a Daily account at https://dashboard.daily.co/u/signup. Once signed up, you can create a room from the dashboard and set the `DAILY_ROOM_URL` and `DAILY_API_KEY` environment variables. Alternatively, you can let the example create a room for you (this still requires `DAILY_API_KEY`). Then, start any example with `-t daily`:

```shell
uv run getting-started/06-voice-agent.py -t daily
```

**Twilio**: It is also possible to run an example through a Twilio phone number. You will need to set up a few things:

1. Install and run ngrok:

   ```shell
   ngrok http 7860
   ```

2. Configure your Twilio phone number. One way is to set up a TwiML app with its request URL set to the ngrok URL from step 1, then point your phone number at the new TwiML app.

Then, run the example with:

```shell
uv run getting-started/06-voice-agent.py -t twilio -x NGROK_HOST_NAME
```

- Progressive introduction to Pipecat, from minimal TTS to a full voice agent with function calling.
- Full STT + LLM + TTS voice agent pipelines showcasing different speech service providers (Deepgram, ElevenLabs, Cartesia, etc.).
- Function calling with different LLM providers (OpenAI, Anthropic, Google, etc.).
- Speech-to-text examples with various STT providers.
- Image description and vision capabilities with different multimodal LLMs.
- Realtime and multimodal live APIs (OpenAI Realtime, Gemini Live, AWS Nova Sonic, Ultravox, Grok).
- Maintaining conversation context across sessions with different providers.
- Summarizing conversation context to manage token limits.
- Changing service settings at runtime, organized by service type.
- Turn detection, interruption handling, and user input management.
- LLM thinking/reasoning modes and MCP (Model Context Protocol) tool server integration.
- Transport layer examples (WebRTC, Daily, LiveKit).
- Video avatar integrations (Tavus, HeyGen, Simli, LemonSlice).
- Video processing, mirroring, GStreamer, and custom video tracks.
- Audio recording, background sounds, and sound effects.
- Pipeline monitoring: observers, heartbeats, and Sentry metrics.
- Retrieval-augmented generation, grounding, and long-term memory (Mem0, Gemini).
- Miscellaneous features: wake phrases, live translation, service switching, voice switching, and more.
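Across providers, function calling generally reduces to registering a JSON tool schema that the LLM can choose to invoke. As a provider-agnostic sketch in the common OpenAI-style shape — the tool name and parameters here are made up for illustration, and Pipecat's actual registration API differs per service:

```python
import json

# Hypothetical tool definition in the OpenAI-style function-calling format.
# Other providers (Anthropic, Google, ...) use similar but not identical schemas.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City and country, e.g. 'Lisbon, Portugal'",
                },
            },
            "required": ["location"],
        },
    },
}

# The schema must round-trip cleanly as JSON before being sent to a provider.
serialized = json.dumps(weather_tool)
```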
To run an example on a different host or port:

```shell
uv run python <example-name> --host 0.0.0.0 --port 8080
```

- **No audio/video**: Check browser permissions for microphone and camera.
- **Connection errors**: Verify the API keys in your `.env` file.
- **Port conflicts**: Use `--port` to change the port.
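For the port-conflict case, it can help to check whether something is already listening before launching. A small stdlib sketch (the host and the default port of 7860 are assumptions matching the examples' default web interface):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if a listener already accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful TCP connect.
        return s.connect_ex((host, port)) == 0

# Example: detect a conflict on the default port before starting an example.
if port_in_use(7860):
    print("Port 7860 is busy; pass --port to choose another.")
```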
For more examples, visit the pipecat-examples repository.