Docker Compose for AI agents. Define multi-agent workflows as config files. Orchestrate from your terminal. Works with any agent tool.
```bash
# Install from source (PyPI coming soon)
pip install git+https://github.com/srijansk/agent-relay.git

# Initialize a 4-agent workflow (Planner → Reviewer → Implementer → Auditor)
relay init --template plan-review-implement-audit

# See the current state
relay status

# Get the prompt for the next agent — copy into Cursor, Codex, Aider, or any tool
relay next

# After the agent finishes, advance the workflow
relay advance

# Or launch the TUI dashboard
relay dash
```

That's it. No API keys required. No backend configuration. Just `relay next`, copy the prompt, paste into your agent tool, and `relay advance` when done.
Agent Relay coordinates multiple AI agents through a file-based protocol:
- An agentic orchestrator holds product intent and decides how to run the orchestra
- The orchestrator gauges task complexity and blast radius to route work across roles
- You define a workflow (stages, roles, transitions) in `workflow.yml`
- A state machine tracks execution state and enforces deterministic handoffs
- Agents hand off work through shared artifact files (plans, reviews, build logs)
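Under the hood, a handoff is just "read the shared artifacts, build a prompt, follow a transition". A minimal Python sketch of that loop, using a hypothetical in-code stage table (in the real tool the stages come from `workflow.yml`, not code):

```python
# Illustrative sketch of the file-based handoff protocol. The stage table
# and function names here are hypothetical, for explanation only.
from pathlib import Path

STAGES = {
    "plan_draft":  {"agent": "planner",  "artifact": "plan.md",   "next": "plan_review"},
    "plan_review": {"agent": "reviewer", "artifact": "review.md", "next": "done"},
    "done":        {"terminal": True},
}

def compose_prompt(stage: str, artifacts: Path) -> str:
    """Build the next agent's prompt from shared artifact files."""
    spec = STAGES[stage]
    context = (artifacts / "context.md").read_text()
    return f"Role: {spec['agent']}\nWrite {spec['artifact']}.\n\n{context}"

def advance(stage: str) -> str:
    """Deterministic handoff: follow the state machine's transition."""
    spec = STAGES[stage]
    return stage if spec.get("terminal") else spec["next"]
```

Because state and artifacts are plain files, any tool that can read a prompt and write a file can participate.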
```
┌──────────────────────────────────────────────┐
│             AGENTIC ORCHESTRATOR             │
│  intent + complexity + blast-radius routing  │
│    validates trajectory, requests re-runs    │
└──────────────────────────────────────────────┘
                      │
                      ▼
┌──────────┐     ┌──────────┐     ┌─────────────┐     ┌─────────┐
│ PLANNER  │ ──► │ REVIEWER │ ──► │ IMPLEMENTER │ ──► │ AUDITOR │ ──► DONE
│ plan.md  │     │review.md │     │build_log.md │     │audit.md │
└──────────┘     └──────────┘     └─────────────┘     └─────────┘
     ▲                │                 ▲romantic           │
     └─── changes ────┘                 └──── changes ─────┘
```
| Feature | CrewAI / LangGraph | Agent Relay |
|---|---|---|
| How you define agents | Write Python code | Write YAML config |
| Runtime | Locked to their framework | Works with any tool (Cursor, Codex, Aider, Ollama, ChatGPT) |
| Where state lives | In memory / database | In your repo (version-controlled files) |
| Human in the loop | Afterthought | First-class (`relay next` → copy-paste → `relay advance`) |
| Entry point | Install SDK, write code, configure API keys | `pip install agent-relay && relay init` |
Agent Relay is the glue layer between your agent tools and your workflow. It doesn't replace your agents — it coordinates them.
Everything lives in `.relay/` in your repo:

```
.relay/
  relay.yml                 # Global config (default workflow, backend, orchestrator)
  workflows/
    default/
      workflow.yml          # State machine definition
      state.yml             # Current state (auto-managed)
      orchestrator_log.yml  # Orchestrator decisions and context (when enabled)
  roles/
    planner.yml             # Behavioral rules for planner agent
    reviewer.yml            # Behavioral rules for reviewer agent
    ...
  artifacts/
    context.md              # Project context (you fill this in)
    plan.md                 # Written by planner, read by reviewer
    plan_review.md          # Written by reviewer, read by planner
    build_log.md            # Written by implementer
    ...
```
A `workflow.yml` defines roles, stages, and transitions:

```yaml
name: "my-project"
version: 1
roles:
  planner:
    description: "Creates implementation plans"
    writes: [plan.md]
    reads: [context.md, plan_review.md]
    rules: roles/planner.yml
  reviewer:
    description: "Reviews plans for correctness"
    writes: [plan_review.md]
    reads: [context.md, plan.md]
    rules: roles/reviewer.yml
stages:
  plan_draft: { agent: planner, next: plan_review }
  plan_review: { agent: reviewer, next: { approve: done, reject: plan_changes } }
  plan_changes: { agent: planner, next: plan_review }
  done: { terminal: true }
initial_stage: plan_draft
limits:
  max_plan_iterations: 5
```

A role file (`roles/reviewer.yml`) defines how an agent behaves:

```yaml
name: reviewer
system_prompt: |
  You are a Reviewer. Critically evaluate the plan for
  correctness, completeness, and feasibility.
output_format: |
  ## Verdict: APPROVE | REQUEST_CHANGES
  ## Summary: ...
  ## Required Changes: ...
verdict_field: "Verdict"
approve_value: "APPROVE"
reject_value: "REQUEST_CHANGES"
```

The `verdict_field` enables automatic branching: Relay reads the agent's output, extracts the verdict, and follows the correct branch in the state machine.
| Command | What it does |
|---|---|
| `relay init` | Create a new workflow |
| `relay init --template plan-review-implement-audit` | Use the built-in 4-agent template |
| `relay init --name feature-x` | Create a named workflow (multiple per repo) |
| `relay status` | Show current stage, active role, iterations |
| `relay next` | Print the prompt for the next agent |
| `relay advance` | Advance to the next stage (auto-extracts verdict for branching) |
| `relay run` | Run with backend (manual: print + wait) |
| `relay run --loop` | Run the full loop until done or limit hit |
| `relay dash` | TUI dashboard |
| `relay reset` | Reset to initial stage |
| `relay reset --clean` | Reset + wipe artifacts |
| `relay validate` | Check workflow.yml for errors |
| `relay export cursor` | Export to Cursor .mdc rules + prompts |
Run multiple workflows in the same repo:

```bash
relay init --name ark-m0 --template plan-review-implement-audit
relay init --name ark-m1 --template plan-review-implement-audit
relay status --workflow ark-m0
relay next --workflow ark-m1
```

The classic 4-agent loop:
- Planner creates an implementation plan
- Reviewer critically reviews it (APPROVE / REQUEST_CHANGES)
- Implementer builds it, maintaining a build log
- Auditor verifies correctness, catches shortcuts (APPROVE / REQUEST_CHANGES)
```bash
relay init --template plan-review-implement-audit
```

More templates coming: code-review, research-write-edit, debug-fix-verify.
If you use Cursor IDE, you can export your workflow to Cursor-native format:

```bash
relay export cursor
```

This generates `.cursor/rules/*.mdc` files and `.cursor/prompts/*.txt` files that you can use directly in Cursor's agent sessions.
By default, Agent Relay runs mechanically — it follows the state machine transitions without evaluating whether agents are staying on track. Enable the orchestrator to add an intelligent layer that holds your intent, evaluates agent outputs, and course-corrects when agents drift.
- Intent injection: Every agent's prompt is enriched with your project intent and a summary of prior steps
- Post-step evaluation: After each agent finishes, the orchestrator evaluates whether the output aligns with the vision
- Course correction: If an agent's output drifts, the orchestrator can request a re-run with specific feedback
- Context accumulation: A persistent log of decisions and concerns flows through the entire workflow
Add this to `.relay/relay.yml`:

```yaml
backend: openai
orchestrator:
  enabled: true
  provider: openai   # openai or anthropic
  model: gpt-4o
  intent: |
    We are building a REST API for user management.
    Philosophy: incremental delivery, no breaking changes, test-first.
    Key constraint: must maintain backward compatibility with v1 API.
```

Then run:

```bash
relay run --loop --backend openai
```

The orchestrator makes two cheap LLM calls per step (~500 tokens each):

- Pre-step: "Should we proceed with this agent? Any context to add to the prompt?"
- Post-step: "Does this output align with the intent? Any concerns for the next agent?"
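Conceptually, the two checkpoints look something like this (a sketch only: `call_llm` is a generic stand-in for the configured provider client, and the prompts and names are illustrative):

```python
# Sketch of the orchestrator's two per-step checkpoints. `call_llm` is a
# hypothetical callable standing in for the OpenAI/Anthropic client.
def pre_step(call_llm, intent: str, role: str, base_prompt: str) -> str:
    """Enrich the agent's prompt with intent plus orchestrator context."""
    note = call_llm(
        f"Intent:\n{intent}\n\nAbout to run the {role} agent. "
        "Any context to add to its prompt? Reply briefly."
    )
    return f"{base_prompt}\n\n## Intent\n{intent}\n\n## Orchestrator notes\n{note}"

def post_step(call_llm, intent: str, output: str) -> bool:
    """Return True if the output aligns with the intent; False flags a re-run."""
    verdict = call_llm(
        f"Intent:\n{intent}\n\nAgent output:\n{output}\n\n"
        "Does this align with the intent? Answer ALIGNED or DRIFT."
    )
    return "ALIGNED" in verdict
```

Because both checks are short, structured questions, they stay cheap even on every step of a long workflow.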
| Without (mechanical) | With orchestrator |
|---|---|
| Planner follows template blindly | Planner gets intent + prior context in every prompt |
| Reviewer checks plan structure only | Reviewer's prompt includes orchestrator concerns from prior steps |
| No one catches philosophical drift | Orchestrator flags when output misses the point |
| State machine follows transitions | Orchestrator can request re-runs before advancing |
The orchestrator log is persisted at `.relay/workflows/{name}/orchestrator_log.yml` and survives restarts.
Agent Relay supports multiple backends for invoking agents:
| Backend | Command | What it does |
|---|---|---|
| manual (default) | `relay run` | Prints prompt, waits for you to paste into your tool and press Enter |
| openai | `relay run --backend openai` | Calls OpenAI API (gpt-4o by default), writes response to artifact file |
| anthropic | `relay run --backend anthropic` | Calls Anthropic API (Claude), writes response to artifact file |
| cursor | `relay run --backend cursor` | Invokes Cursor CLI (requires `cursor` in PATH) |
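The common shape of a backend can be sketched as a single `invoke()` surface (an illustrative interface with made-up class names, not the project's actual API):

```python
# Hypothetical sketch: every backend turns a prompt into agent output the
# same way, so the state machine doesn't care who ran the agent.
from pathlib import Path
from typing import Protocol

class Backend(Protocol):
    def invoke(self, prompt: str) -> str: ...

class ManualBackend:
    """Print the prompt, then wait for the user to paste the agent's reply."""
    def invoke(self, prompt: str) -> str:
        print(prompt)
        return input("Paste the agent's output and press Enter: ")

class EchoBackend:
    """Keyless stand-in so this sketch runs without any API configured."""
    def invoke(self, prompt: str) -> str:
        return f"## Verdict: APPROVE\n(echoed {len(prompt)} chars)"

def run_stage(backend: Backend, prompt: str, artifact: Path) -> str:
    """Invoke the backend and persist its response to the artifact file."""
    output = backend.invoke(prompt)
    artifact.write_text(output)
    return output
```

Swapping manual copy-paste for an API-driven run is then just a different `Backend` passed to the same loop.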
```bash
# Run the entire workflow end-to-end with OpenAI
export OPENAI_API_KEY=sk-...
relay run --loop --backend openai

# Or with Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
relay run --loop --backend anthropic
```

Set it in `.relay/relay.yml`:

```yaml
default_workflow: default
backend: openai
backend_config:
  model: gpt-4o-mini
  temperature: 0.2
  max_tokens: 16384
```

```bash
pip install "agent-relay[openai]"     # For OpenAI backend
pip install "agent-relay[anthropic]"  # For Anthropic backend
```

- Protocol layer: Pydantic v2 models validate `workflow.yml` and `roles/*.yml` with clear error messages
- State machine: Tracks the current stage, resolves transitions (linear or branching via verdict extraction)
- Verdict extraction: Regex parses agent output for `## Verdict: APPROVE` patterns
- Orchestrator (optional): LLM-powered layer that holds intent, enriches prompts, evaluates outputs, and course-corrects
- Backends: Pluggable agent invocation — manual, OpenAI, Anthropic, Cursor CLI
- TUI: Textual-based dashboard shows live workflow state
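The kind of cross-references the validation layer has to check can be sketched with plain dicts (the project uses Pydantic v2 models; this stdlib-only version is illustrative, with a made-up function name):

```python
# Illustrative sketch of workflow validation: catch unknown roles and
# dangling transitions before the state machine ever runs.
def validate_workflow(wf: dict) -> list[str]:
    errors = []
    stages = wf.get("stages", {})
    roles = wf.get("roles", {})
    if wf.get("initial_stage") not in stages:
        errors.append(f"initial_stage {wf.get('initial_stage')!r} is not a defined stage")
    for name, spec in stages.items():
        if spec.get("terminal"):
            continue  # terminal stages need no agent or transition
        if spec.get("agent") not in roles:
            errors.append(f"stage {name!r} uses unknown role {spec.get('agent')!r}")
        nxt = spec.get("next")
        for target in (nxt.values() if isinstance(nxt, dict) else [nxt]):
            if target not in stages:
                errors.append(f"stage {name!r} points at unknown stage {target!r}")
    return errors
```

Catching these errors at `relay validate` time, with a readable message per problem, is much cheaper than discovering them mid-workflow.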
```bash
git clone https://github.com/srijansk/agent-relay.git
cd agent-relay
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest
```

Contributions welcome. Please open an issue first to discuss what you'd like to change.
- Follow the existing code style (ruff for linting)
- Add tests for new functionality
- Ensure all tests pass before submitting a PR
MIT — see LICENSE for details.