
Pipecat Anam Integration

Generate real-time video avatars for your Pipecat AI agents with Anam.

Maintainer: Anam (@anam-org)

Installation

pip install pipecat-anam

Or with uv:

uv add pipecat-anam

You'll also need Pipecat with the services you use (STT, TTS, LLM, transport). For the example:

uv sync --extra dev --extra example

This installs the example's Pipecat service and transport extras in one shot (deepgram, cartesia, google, daily, runner, webrtc) plus local dev tooling.

Or with pip:

pip install -e ".[dev,example]"

If you are building your own pipeline, install only the Pipecat extras you need.

Prerequisites

  • Anam API key
  • API keys for STT, TTS, and LLM (e.g., Deepgram, Cartesia, Google)
  • Daily.co API key for WebRTC transport (optional)
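Before wiring up a pipeline, it can help to fail fast when a required key is missing. A minimal standard-library sketch; `ANAM_API_KEY` matches the usage example below, but the other variable names are assumptions, so check `env.example` for the exact keys the example expects:

```python
import os

def missing_env(names):
    """Return the names of any environment variables that are unset or empty."""
    return [name for name in names if not os.environ.get(name)]

# ANAM_API_KEY matches the usage example below; the other names are
# assumptions -- consult env.example for the keys your services expect.
required = ["ANAM_API_KEY", "DEEPGRAM_API_KEY", "CARTESIA_API_KEY", "GOOGLE_API_KEY"]
for name in missing_env(required):
    print(f"warning: {name} is not set")
```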

Usage with Pipecat Pipeline

The AnamVideoService wraps Anam's Python SDK for seamless integration with Pipecat. It lets you build conversational AI applications in which an Anam avatar provides synchronized video and audio output while your application handles the conversation logic. The service iterates over the decoded audio and video frames from Anam and passes them to the next service in the pipeline.

  • enable_audio_passthrough=True bypasses Anam's orchestration layer and renders the avatar directly from TTS audio.
  • enable_session_replay=False disables session recording on Anam's backend.

import os

from anam import PersonaConfig
from pipecat_anam import AnamVideoService

persona_config = PersonaConfig(
    avatar_id="your-avatar-id",
    enable_audio_passthrough=True,
)

anam = AnamVideoService(
    api_key=os.environ["ANAM_API_KEY"],
    persona_config=persona_config,
    api_base_url="https://api.anam.ai",
    api_version="v1",
)

pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm,
    tts,
    anam,  # Video avatar (returns synchronized audio/video)
    transport.output(),
    context_aggregator.assistant(),
])

See example.py for a complete working example.

Running the Example

  1. Install dependencies:
     uv sync --extra dev --extra example
  2. Set up your environment:
     cp env.example .env
     # Edit .env with your API keys
  3. Run:
     uv run python example.py -t daily
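If you prefer to load the .env file yourself rather than relying on the example's loader (Pipecat examples typically use python-dotenv), a minimal standard-library parser looks roughly like this. It handles simple KEY=VALUE lines only, which is an assumption about the file's format:

```python
import os

def load_dotenv_minimal(path=".env"):
    """Populate os.environ from simple KEY=VALUE lines.

    Skips blank lines and comments; does not overwrite variables
    that are already set in the environment.
    """
    try:
        with open(path) as f:
            lines = f.readlines()
    except FileNotFoundError:
        return
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))
```

For anything beyond flat KEY=VALUE files (multiline values, variable expansion), python-dotenv is the safer choice.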

Or with the built-in WebRTC transport:

uv run python example.py -t webrtc

The bot will create a room (or use the built-in client) with a video avatar that responds to your voice.

Compatibility

  • Tested with Pipecat v0.0.100+
  • Python 3.10+
  • Daily transport or built-in WebRTC transport

License

BSD-2-Clause - see LICENSE
