# Pipecat Examples

This directory contains examples showing how to build voice and multimodal agents with Pipecat.

## Setup

1. Follow the steps in the repository README to configure your local environment.

   - **Run from the root directory:** all commands below assume you are in the repository root.
   - **Using local audio?** `LocalAudioTransport` requires the `portaudio` system library; install it before using that transport.

2. Copy the `env.example` file and add API keys for the services you plan to use:

   ```bash
   cp env.example .env
   # Edit .env with your API keys
   ```

3. Run any example:

   ```bash
   uv run python getting-started/01-say-one-thing.py
   ```

4. Open the web interface at http://localhost:7860/client/ and click "Connect".
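Since an example cannot start without the keys its services need, a quick preflight check can save a round trip. The helper below is illustrative only (not part of Pipecat), and the key names are placeholders; check each example's imports for the services it actually uses:

```python
# Illustrative helper (not part of Pipecat): report which API keys an
# example needs that are missing from the environment. The key names in
# the usage comment are placeholders.
import os


def missing_keys(required, env=None):
    """Return the required key names that are unset or empty."""
    env = os.environ if env is None else env
    return [key for key in required if not env.get(key)]


# e.g. missing_keys(["OPENAI_API_KEY", "CARTESIA_API_KEY"])
```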

## Running examples with other transports

Most examples support running with other transports, like Twilio or Daily.

### Daily

Create a Daily account at https://dashboard.daily.co/u/signup. Once signed up, you can create a room from the dashboard and set the `DAILY_ROOM_URL` and `DAILY_API_KEY` environment variables. Alternatively, let the example create a room for you (this still requires `DAILY_API_KEY`). Then start any example with `-t daily`:

```bash
uv run getting-started/06-voice-agent.py -t daily
```
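The room-selection rules described above can be sketched as follows. This is an assumed reading of the behavior, not Pipecat's actual code: `DAILY_API_KEY` is always required, and when `DAILY_ROOM_URL` is absent the example creates a room via Daily's REST API.

```python
# Sketch (assumed, not Pipecat's actual implementation) of the Daily
# room-selection logic: DAILY_API_KEY is mandatory; DAILY_ROOM_URL is
# optional, and None signals "create a room via the Daily REST API".
import os


def resolve_daily_room(env=None):
    env = os.environ if env is None else env
    if not env.get("DAILY_API_KEY"):
        raise RuntimeError("DAILY_API_KEY is required for the Daily transport")
    return env.get("DAILY_ROOM_URL") or None
```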

### Twilio

You can also run the examples through a Twilio phone number. You will need to set up a few things:

1. Install and run ngrok:

   ```bash
   ngrok http 7860
   ```

2. Configure your Twilio phone number. One way is to set up a TwiML app whose request URL is the ngrok URL from step 1, then point your phone number at the new TwiML app.

Then run the example with:

```bash
uv run getting-started/06-voice-agent.py -t twilio -x NGROK_HOST_NAME
```
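For context on what the TwiML app does: a voice-agent webhook typically answers the call with a `<Connect><Stream>` verb pointing at a websocket on your server. The snippet below builds such a response for illustration; the `/ws` path is an assumption and may differ from what the example actually serves.

```python
# Illustrative TwiML builder. The /ws websocket path is an assumption;
# Pipecat's Twilio integration generates its own response for you.
from xml.sax.saxutils import quoteattr


def stream_twiml(ngrok_host):
    ws_url = f"wss://{ngrok_host}/ws"
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f"<Response><Connect><Stream url={quoteattr(ws_url)} /></Connect></Response>"
    )
```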

## Directory Structure

- Progressive introduction to Pipecat, from minimal TTS to a full voice agent with function calling.
- Full STT + LLM + TTS voice-agent pipelines showcasing different speech service providers (Deepgram, ElevenLabs, Cartesia, etc.).
- Function calling with different LLM providers (OpenAI, Anthropic, Google, etc.).
- Speech-to-text examples with various STT providers.
- Image description and vision capabilities with different multimodal LLMs.
- Realtime and multimodal live APIs (OpenAI Realtime, Gemini Live, AWS Nova Sonic, Ultravox, Grok).
- Maintaining conversation context across sessions with different providers.
- Summarizing conversation context to manage token limits.
- Changing service settings at runtime, organized by service type:
  - `stt/` — speech-to-text settings
  - `tts/` — text-to-speech settings
  - `llm/` — LLM settings
- Turn detection, interruption handling, and user input management.
- LLM thinking/reasoning modes and MCP (Model Context Protocol) tool server integration.
- Transport layer examples (WebRTC, Daily, LiveKit).
- Video avatar integrations (Tavus, HeyGen, Simli, LemonSlice).
- Video processing, mirroring, GStreamer, and custom video tracks.
- Audio recording, background sounds, and sound effects.
- Pipeline monitoring: observers, heartbeats, and Sentry metrics.
- Retrieval-augmented generation, grounding, and long-term memory (Mem0, Gemini).
- Miscellaneous features: wake phrases, live translation, service switching, voice switching, and more.

## Advanced Usage

### Customizing Network Settings

```bash
uv run python <example-name> --host 0.0.0.0 --port 8080
```
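Under the hood this is standard command-line argument parsing. A minimal sketch with stdlib `argparse`, where the defaults are assumptions inferred from the http://localhost:7860 default used earlier in this README:

```python
# Minimal sketch of --host/--port handling with stdlib argparse.
# The defaults are assumptions, not Pipecat's actual runner code.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(description="Run a Pipecat example")
    parser.add_argument("--host", default="localhost", help="interface to bind")
    parser.add_argument("--port", type=int, default=7860, help="HTTP port")
    return parser
```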

## Troubleshooting

- **No audio/video:** check browser permissions for microphone and camera.
- **Connection errors:** verify the API keys in your `.env` file.
- **Port conflicts:** use `--port` to change the port.
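For the port-conflict case, a quick stdlib check (not part of Pipecat) tells you whether something is already listening on the default port before you go digging further:

```python
# Returns True if something is already listening on the given port.
# Stdlib only; illustrative, not part of Pipecat.
import socket


def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        return sock.connect_ex((host, port)) == 0


# e.g. port_in_use(7860) -> True means the default port is taken.
```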

For more examples, visit the pipecat-examples repository.