Generate real-time video avatars for your Pipecat AI agents with Anam.
Maintainer: Anam (@anam-org)
```
pip install pipecat-anam
```

Or with uv:

```
uv add pipecat-anam
```

You'll also need Pipecat with the services you use (STT, TTS, LLM, transport). For the example:

```
uv sync --extra dev --extra example
```

This installs the example's Pipecat service and transport extras in one shot (deepgram, cartesia, google, daily, runner, webrtc) plus local dev tooling.

Or with pip:

```
pip install -e ".[dev,example]"
```

If you are building your own pipeline, install only the Pipecat extras you need.
- Anam API key
- API keys for STT, TTS, and LLM (e.g., Deepgram, Cartesia, Google)
- Daily.co API key for WebRTC transport (optional)
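In the example these keys are read from environment variables, typically via a `.env` file. A hypothetical sketch — only `ANAM_API_KEY` is used by the code in this README; the other variable names are assumptions based on the services listed above:

```shell
# Hypothetical .env — only ANAM_API_KEY is confirmed by this README;
# the other names are assumptions for the example's services.
ANAM_API_KEY=your-anam-key
DEEPGRAM_API_KEY=your-deepgram-key
CARTESIA_API_KEY=your-cartesia-key
GOOGLE_API_KEY=your-google-key
DAILY_API_KEY=your-daily-key   # only needed for the Daily transport
```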
The AnamVideoService wraps Anam's Python SDK for seamless integration with Pipecat, letting you build conversational AI applications in which an Anam avatar provides synchronized video and audio output while your application handles the conversation logic. The service iterates over the decoded audio and video frames from Anam and passes them to the next service in the pipeline.
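The frame-forwarding pattern described above can be sketched in plain Python. This is a toy illustration only; `Frame`, `avatar_frames`, and `forward_frames` are made-up names, not the real Anam or Pipecat APIs:

```python
# Toy sketch of the frame pass-along pattern (illustrative names only).
import asyncio
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str   # "audio" or "video"
    data: bytes

async def avatar_frames():
    # Stand-in for the decoded audio/video stream coming from Anam.
    for i in range(4):
        yield Frame(kind="audio" if i % 2 == 0 else "video", data=bytes([i]))

async def forward_frames(downstream):
    # Iterate over incoming frames and push each one downstream,
    # as AnamVideoService does for the next service in the pipeline.
    async for frame in avatar_frames():
        await downstream(frame)

async def main():
    received = []

    async def downstream(frame):
        received.append(frame.kind)

    await forward_frames(downstream)
    return received

print(asyncio.run(main()))  # ['audio', 'video', 'audio', 'video']
```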
- `enable_audio_passthrough=True` bypasses Anam's orchestration layer and renders the avatar directly from TTS audio.
- `enable_session_replay=False` disables session recording on Anam's backend.
```python
import os

from anam import PersonaConfig
from pipecat_anam import AnamVideoService

persona_config = PersonaConfig(
    avatar_id="your-avatar-id",
    enable_audio_passthrough=True,
)

anam = AnamVideoService(
    api_key=os.environ["ANAM_API_KEY"],
    persona_config=persona_config,
    api_base_url="https://api.anam.ai",
    api_version="v1",
)
```
```python
pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm,
    tts,
    anam,  # Video avatar (returns synchronized audio/video)
    transport.output(),
    context_aggregator.assistant(),
])
```

See example.py for a complete working example.
- Install dependencies:
```
uv sync --extra dev --extra example
```

- Set up your environment:

```
cp env.example .env
# Edit .env with your API keys
```

- Run:

```
uv run python example.py -t daily
```

Or with the built-in WebRTC transport:

```
uv run python example.py -t webrtc
```

The bot will create a room (or use the built-in client) with a video avatar that responds to your voice.
- Tested with Pipecat v0.0.100+
- Python 3.10+
- Daily transport or built-in WebRTC transport
BSD-2-Clause - see LICENSE
- Anam Lab (Build and test your persona and get your avatar_id.)
- Anam Documentation (API reference and SDK documentation)
- Anam Community Slack
- Pipecat Discord (#community-integrations)