# Feature Request: Parallel Session Processing #1159
## Summary
Add configurable concurrent session processing to allow multiple Telegram topics/channels to be handled simultaneously, preventing long-running tasks in one session from blocking quick responses in others.
## Problem Statement

### Current Behavior

When using Clawdbot with multiple Telegram group topics (or multiple channels), all incoming messages are processed through a single serial queue:

```
Timeline:
├─ 00:00 Topic A: "Research the latest YOLO models and compare them" (arrives)
├─ 00:01 Topic B: "What time is it?" (arrives, queued)
├─ 00:30 Topic A: Response sent (30s processing)
└─ 00:31 Topic B: Response sent (waited 30s for a 1s task)
```

The queue depth indicator (`🪢 Queue: collect (depth 1)`) shows this backlog, but there is no way to process sessions in parallel.
### Impact
- Poor UX for multi-topic setups: Quick questions wait behind long tasks
- Single user, multiple contexts: Personal assistants often span Work, Home, Projects topics
- Perceived unresponsiveness: Users in Topic B don't know why the bot is "slow"
## Proposed Solution

### Configuration Option

Add a `sessionConcurrency` setting to control parallel session processing:

```json
{
  "agents": {
    "defaults": {
      "sessionConcurrency": 3
    }
  }
}
```

Or per-agent:
```json
{
  "agents": {
    "main": {
      "sessionConcurrency": 3
    },
    "work": {
      "sessionConcurrency": 1
    }
  }
}
```

### Behavior
| Setting | Behavior |
|---|---|
| `1` (default) | Current serial processing |
| `2`–`N` | Process up to N different sessions simultaneously |
| `0` or `unlimited` | No limit (use with caution) |
### Key Constraint
Same-session messages remain serial - only different sessions run in parallel. This preserves conversation coherence:
✅ Topic A message 1 and Topic B message 1 → parallel
❌ Topic A message 1 and Topic A message 2 → serial (same session)
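This constraint can be sketched as a per-session promise chain: each session keeps its own tail, so same-session tasks wait on each other while different sessions run independently. This is illustrative TypeScript, not Clawdbot's actual internals; the `enqueue` name is hypothetical.

```typescript
// Tail of the pending work for each session key (e.g. "telegram:topicA").
const tails = new Map<string, Promise<void>>();

// Serial within a session, parallel across sessions.
function enqueue(sessionId: string, task: () => Promise<void>): Promise<void> {
  const prev = tails.get(sessionId) ?? Promise.resolve();
  // Chain onto this session's tail; run the task whether the
  // previous one resolved or rejected, to keep the chain alive.
  const next = prev.then(task, task);
  tails.set(sessionId, next);
  return next;
}
```

A `sessionConcurrency` limit would then cap how many of these chains are allowed to make progress at once.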
## Technical Considerations

### 1. API Rate Limits
Parallel processing increases API call frequency. Consider:
- Configurable rate limiter shared across workers
- Per-provider rate tracking (Anthropic, OpenAI, etc.)
- Graceful backoff when limits hit
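A shared limiter could be as simple as enforcing a minimum interval between API calls across all workers. A minimal sketch (the `RateLimiter` class and its `acquire` method are illustrative, not an existing Clawdbot API):

```typescript
// Workers call acquire() before each provider API request.
class RateLimiter {
  private last = 0; // timestamp the most recent slot was granted for

  constructor(private minIntervalMs: number) {}

  async acquire(): Promise<void> {
    const now = Date.now();
    // Reserve the next slot, then sleep until it arrives.
    const wait = Math.max(0, this.last + this.minIntervalMs - now);
    this.last = now + wait;
    await new Promise(r => setTimeout(r, wait));
  }
}
```

A real implementation would likely track per-provider budgets and honor `Retry-After` headers rather than a fixed interval.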
```jsonc
{
  "agents": {
    "defaults": {
      "sessionConcurrency": 3,
      "rateLimitStrategy": "shared" // or "per-session"
    }
  }
}
```

### 2. Workspace/File Access
Multiple sessions may read/write to the same workspace. Options:
- Read-only safe: Most workspace reads are safe to parallelize
- Write coordination: Lock files during writes, or use atomic operations
- Session isolation: Each session gets a scratch directory
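The atomic-operations option can be sketched with the classic write-to-temp-then-rename pattern, so a concurrent session never observes a half-written file (`atomicWrite` is a hypothetical helper, not Clawdbot code):

```typescript
import { randomUUID } from "node:crypto";
import { promises as fs } from "node:fs";
import * as path from "node:path";

// Write the full contents to a temp file in the same directory,
// then rename over the target; rename is atomic on POSIX filesystems.
async function atomicWrite(file: string, data: string): Promise<void> {
  const tmp = path.join(path.dirname(file), `.${randomUUID()}.tmp`);
  await fs.writeFile(tmp, data);
  await fs.rename(tmp, file);
}
```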
### 3. Resource Management
- Memory: Each concurrent session holds its own context
- CPU: LLM calls are I/O-bound (waiting for API), so parallelism is efficient
- Suggested default: `sessionConcurrency: 2` balances responsiveness against resource use
### 4. Queue Management

The current queue logic would change from:

```
Global Queue → Single Worker → Sessions
```

to:

```
Per-Session Queues → Worker Pool (size N) → Sessions
```
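A worker pool of size N can be approximated with a counting semaphore that sessions pass through before running. A sketch (the `Semaphore` class is illustrative):

```typescript
// Caps how many sessions execute at once; extra sessions wait in FIFO order.
class Semaphore {
  private waiters: Array<() => void> = [];

  constructor(private permits: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.permits === 0) {
      await new Promise<void>(r => this.waiters.push(r)); // wait for a permit
    } else {
      this.permits--;
    }
    try {
      return await task();
    } finally {
      // Hand the permit directly to the next waiter, if any.
      const next = this.waiters.shift();
      if (next) next();
      else this.permits++;
    }
  }
}
```

With `sessionConcurrency: 1` this degenerates to today's serial behavior, which keeps the default backward compatible.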
### 5. Tool Execution

Some tools may need coordination:

- `exec`/`bash`: Generally safe (OS handles process isolation)
- File writes: May need locking
- External APIs: Rate limit awareness
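For the file-write case, one portable locking approach is a directory-based advisory lock: `mkdir` fails if the lock already exists, which gives mutual exclusion even across processes. This is a sketch under that assumption; `withLock` is a hypothetical helper.

```typescript
import { promises as fs } from "node:fs";

// Hold an advisory lock (a directory) while running fn; retry until acquired.
async function withLock<T>(lockDir: string, fn: () => Promise<T>): Promise<T> {
  for (;;) {
    try {
      await fs.mkdir(lockDir); // succeeds only if the lock is free
      break;
    } catch {
      await new Promise(r => setTimeout(r, 50)); // held elsewhere; retry
    }
  }
  try {
    return await fn();
  } finally {
    await fs.rmdir(lockDir); // release
  }
}
```

A production version would also need stale-lock recovery for crashed holders.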
## Use Cases

### 1. Personal Assistant (Primary Use Case)
Single user with multiple Telegram topics:
- 💬 General Chat
- 💼 Work
- 🏠 Home/Family
- 🔧 Tech/Dev
User expects quick responses regardless of which topic has a long-running task.
### 2. Multi-User Bot
Small team sharing a Clawdbot instance:
- User A asks complex question
- User B shouldn't wait for User A's response
### 3. Cross-Platform
Same agent serving Telegram DM + Discord + Slack:
- Platforms processed independently
- Natural parallelism expectation
## Alternative Approaches Considered

### 1. Multiple Clawdbot Instances

Current workaround: Run separate instances on different ports.

```shell
clawdbot gateway --port 18789  # Topics A, B
clawdbot gateway --port 18790  # Topics C, D
```

Downsides:
- Multiple configs to maintain
- 2-3x resource usage
- Complex routing rules
- Harder to coordinate shared state
### 2. Aggressive Sub-Agent Spawning
Current workaround: Spawn background agents for any task > 10s.
Downsides:
- Adds latency (spawn overhead)
- Not all tasks are spawn-able
- Doesn't solve queue blocking during spawn decision
### 3. Priority Queues

Alternative feature: Priority levels per topic/session.

```json
{
  "telegram": {
    "topics": {
      "1": { "priority": "high" },
      "7": { "priority": "normal" }
    }
  }
}
```

This could complement parallelism but doesn't solve blocking on its own.
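The priority idea reduces to a simple two-level dequeue: drain high-priority jobs before normal ones. A sketch (the `Job` shape and queue layout are illustrative) also makes the limitation visible, since a long high-priority job at the head still blocks everything behind it:

```typescript
type Job = { sessionId: string; run: () => Promise<void> };

// Two FIFO queues; "high" is always drained first.
const queues: Record<"high" | "normal", Job[]> = { high: [], normal: [] };

function dequeue(): Job | undefined {
  return queues.high.shift() ?? queues.normal.shift();
}
```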
## Implementation Suggestions

### Phase 1: Basic Parallelism

- Add `sessionConcurrency` config
- Worker pool with configurable size
- Per-session queue isolation
### Phase 2: Rate Limit Coordination
- Shared rate limiter across workers
- Provider-aware throttling
### Phase 3: Resource Optimization
- Dynamic worker scaling based on load
- Memory-aware concurrency limits
## Example Configuration

```json
{
  "agents": {
    "defaults": {
      "workspace": "/root/clawd",
      "sessionConcurrency": 3,
      "sessionTimeout": 300
    }
  },
  "gateway": {
    "workerPool": {
      "minWorkers": 1,
      "maxWorkers": 5,
      "idleTimeout": 60
    }
  }
}
```

## Environment
- Clawdbot Version: 2026.1.16-2
- Platform: Telegram (groups with topics)
- Setup: Single user, 5+ active topics
- Host: VPS (4GB RAM, 2 vCPU)
## Related

- Current workaround: `sessions_spawn` for background tasks
- Queue depth visible in `/status` output
Thank you for considering this feature! Clawdbot is already fantastic for multi-channel setups - parallel session processing would make it even more responsive. 🦞
Submitted by:
- 🧑💻 Luka Radisic (@LukaRadisic)
- 🤖 Emma (Luka's AI assistant, powered by Clawdbot 🦞)
"We built this feature request together - Luka identified the pain point, Emma documented it."