A Flask web app where you @mention AI agents into a live group chat.
Local models via Ollama · Claude API on demand · Streaming debates · Obsidian memory
Imagine a group chat where everyone at the table is an AI — each with a different personality, model, and reasoning style. You type a message, mention the agents you want, and they all respond. You can spark a structured debate, run a free-form group discussion via live streaming, or pull in Claude as the senior voice in the room.
- @mention routing — only the agents you tag reply
- Debate mode — structured 3-round argument with a final summary
- Free Talk — agents stream a live discussion on any topic via SSE
- Memory — save meeting notes directly to an Obsidian vault
- @claude — Claude API joins as the "headmaster" on demand
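The @mention routing rule above can be sketched as a small parser. This is a hypothetical helper (`route_message` and the `AGENTS` set are illustrative; the real logic lives in `app.py`/`agents.py`):

```python
import re

# Hypothetical agent registry - the real definitions live in agents.py.
AGENTS = {"mistral", "phi3", "gemma2", "deepseek", "claude"}

def route_message(text: str) -> set[str]:
    """Return the set of agents that should reply to a chat message."""
    mentions = set(re.findall(r"@(\w+)", text.lower()))
    if "all" in mentions:
        return AGENTS - {"claude"}     # @all wakes every local agent
    tagged = mentions & AGENTS
    if tagged:
        return tagged                  # only the agents you tag reply
    return AGENTS - {"claude"}         # no mention: all local agents reply
```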
All local agents run any Ollama-compatible model — swap by editing the `"model"` field in `agents.py`. Defaults are chosen to run under 8 GB of VRAM.
| Agent | Default Model | VRAM | Personality |
|---|---|---|---|
| @mistral | `mistral` | ~4 GB | Sharp analytical thinker |
| @phi3 | `phi3` | ~2 GB | Creative lateral thinker |
| @gemma2 | `gemma2:2b` | ~1.5 GB | Balanced, careful summarizer |
| @deepseek | `deepseek-r1:7b` | ~4.7 GB | Deep step-by-step reasoner |
| @claude | `claude-sonnet-4-6` | API | Collaborative, nuanced advisor |
Swap a model: open `agents.py` → change the `"model"` value to anything from `ollama list`.
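An agent entry might look roughly like this. The field names and prompts here are illustrative assumptions, not the actual schema — check `agents.py` for the real shape:

```python
# Hypothetical shape of the registry in agents.py; field names are illustrative.
AGENTS = {
    "mistral": {
        "model": "mistral",        # any name from `ollama list` works here
        "personality": "Sharp analytical thinker",
        "system_prompt": "You are a sharp, analytical debater. Be concise.",
    },
    "gemma2": {
        "model": "gemma2:2b",      # e.g. swap to "llama3.2" after pulling it
        "personality": "Balanced, careful summarizer",
        "system_prompt": "You summarize carefully and weigh both sides.",
    },
}
```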
```bash
# 1. Clone
git clone https://github.com/Ghraven/agent-meeting-room
cd agent-meeting-room

# 2. Install dependencies
pip install -r requirements.txt

# 3. Pull Ollama models
ollama pull mistral
ollama pull phi3
ollama pull gemma2:2b
ollama pull deepseek-r1:7b

# 4. Set up environment
cp .env.example .env
# Edit .env — add your Anthropic API key (only needed for @claude)

# 5. Run
python app.py
# Windows: double-click start.bat
```

| What you type | What happens |
|---|---|
| `@mistral explain quantum computing` | Only Mistral replies |
| `@phi3 @gemma2 brainstorm ideas` | Phi3 and Gemma2 reply |
| `@all what should I build next?` | All local agents reply |
| `@claude review this plan` | Claude API responds |
| `@debate is AI good or bad?` | 3-round structured debate |
| (no mention) | All local agents reply |
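The debate flow can be sketched in outline. This is a simplified sketch under assumptions — `run_debate` and its `ask` callback are hypothetical names; the actual prompts and turn ordering live in `agents.py`:

```python
# Simplified sketch of a 3-round structured debate with a final summary.
def run_debate(topic, agents, ask):
    """ask(agent, prompt) -> str is any LLM call (Ollama, Claude, ...)."""
    transcript = []
    for round_no in range(1, 4):                 # three structured rounds
        for agent in agents:
            context = "\n".join(transcript[-len(agents):])
            prompt = (f"Debate topic: {topic}\nRound {round_no}.\n"
                      f"Previous arguments:\n{context}\nYour argument:")
            transcript.append(f"[{agent} R{round_no}] {ask(agent, prompt)}")
    summary = ask(agents[0], "Summarize the debate:\n" + "\n".join(transcript))
    return transcript, summary
```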
For Free Talk, click the Free Talk button → give a topic → agents discuss live in real time.
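Server-Sent Events are a natural fit for streaming a live discussion. A minimal Flask sketch of how such an endpoint could look — the route name, payload shape, and stubbed agent turns are assumptions, not the app's actual API:

```python
import json
from flask import Flask, Response, request

app = Flask(__name__)

def generate_turns(topic):
    """Yield one SSE event per agent turn. Stubbed replies here; a real
    version would stream tokens from Ollama instead."""
    for agent in ("mistral", "phi3", "gemma2"):
        payload = {"agent": agent, "text": f"My take on {topic}..."}
        yield f"data: {json.dumps(payload)}\n\n"   # SSE wire format

@app.route("/free-talk")
def free_talk():
    topic = request.args.get("topic", "anything")
    return Response(generate_turns(topic), mimetype="text/event-stream")
```

On the frontend, a plain `EventSource("/free-talk?topic=...")` in vanilla JS receives each `data:` line as a message event, which matches the no-framework approach of `index.html`.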
```
agent-meeting-room/
├── app.py              Flask routes and SSE streaming
├── agents.py           Agent definitions, Ollama + Claude calls, debate logic
├── memory.py           Obsidian vault integration
├── templates/
│   └── index.html      Single-page frontend (Vanilla JS + SSE)
├── start.bat           Windows one-click launcher
├── .env.example        Environment variable template
└── requirements.txt
```
| Requirement | Notes |
|---|---|
| Python 3.11+ | |
| Ollama | Must be running on port 11434 |
| Anthropic API key | Only needed for @claude — optional |
| Obsidian | Optional — for memory/note saving |
- Backend: Python · Flask · Server-Sent Events
- Local AI: Ollama (Mistral · Phi3 · Gemma2 · DeepSeek)
- Cloud AI: Anthropic Claude API
- Frontend: Vanilla JS · SSE streaming
- Memory: Obsidian Markdown vault
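Saving to an Obsidian vault amounts to writing Markdown files into the vault folder, which Obsidian indexes automatically. A sketch of what `memory.py` might do — the function name, filename scheme, and note layout are assumptions:

```python
from datetime import datetime
from pathlib import Path

def save_meeting_note(vault_dir, title, transcript):
    """Write a chat transcript as a Markdown note inside the vault.
    Illustrative only; see memory.py for the real implementation."""
    vault = Path(vault_dir)
    vault.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d-%H%M")
    note = vault / f"{stamp} {title}.md"
    lines = [f"# {title}", ""] + [f"**{agent}:** {text}" for agent, text in transcript]
    note.write_text("\n".join(lines), encoding="utf-8")
    return note
```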
MIT — see LICENSE
See CONTRIBUTING.md — adding a new agent takes about 5 lines.
See CHANGELOG.md for release history.
