Message Processing Bottleneck with Multiple Telegram Bots (main lane concurrency limit) #16055
Description
Environment
- OpenClaw Version: 2026.2.13 (updated from 2026.2.9, issue persists)
- Platform: Linux (Ubuntu)
- Configuration: 5 independent Telegram bots (agents: ying, feng, luo, jie, kaifa)
- Setup: Each agent has its own bot token and accountId
Problem Description
When sending messages to multiple agents simultaneously, responses are delayed by 1-2 minutes. Investigation reveals that all messages are processed serially through a shared queue, even though each agent is configured as an independent bot.
Root Cause
According to /usr/lib/node_modules/openclaw/docs/concepts/queue.md:
A lane-aware FIFO queue drains each lane with a configurable concurrency cap (default 1 for unconfigured lanes; main defaults to 4, subagent to 8).
Default lane (`main`) is process-wide for inbound + main heartbeats; set `agents.defaults.maxConcurrent` to allow multiple sessions in parallel.
The issue:
- All inbound messages from different Telegram bots queue into the same `main` lane
- The `main` lane has a default concurrency of 4
- Even with `agents.defaults.maxConcurrent: 100` configured, the main lane still uses the default value of 4
- This creates a bottleneck when multiple independent agents receive messages simultaneously
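To make the serialization concrete, here is a minimal sketch of a lane-aware FIFO queue with per-lane concurrency caps (illustrative only, not OpenClaw's actual implementation): five simultaneous tasks on a shared `main` lane capped at 4 reach at most 4-way parallelism, while the same five tasks on per-agent lanes all run at once.

```typescript
// Illustrative lane-aware FIFO queue: each lane drains up to its own
// concurrency cap; lanes without an explicit cap use a default of 1.
type Task = () => Promise<void>;

class LaneQueue {
  private lanes = new Map<string, { pending: Task[]; active: number }>();
  constructor(private caps: Record<string, number>, private defaultCap = 1) {}

  enqueue(lane: string, task: Task): void {
    let state = this.lanes.get(lane);
    if (!state) {
      state = { pending: [], active: 0 };
      this.lanes.set(lane, state);
    }
    state.pending.push(task);
    this.drain(lane);
  }

  private drain(lane: string): void {
    const state = this.lanes.get(lane)!;
    const cap = this.caps[lane] ?? this.defaultCap;
    while (state.active < cap && state.pending.length > 0) {
      const task = state.pending.shift()!;
      state.active += 1;
      task().finally(() => {
        state.active -= 1;
        this.drain(lane);
      });
    }
  }
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Enqueue five 20ms tasks and report the peak number running concurrently.
async function peakParallelism(
  laneFor: (i: number) => string,
  caps: Record<string, number>,
): Promise<number> {
  const queue = new LaneQueue(caps);
  let running = 0;
  let peak = 0;
  const done: Promise<void>[] = [];
  for (let i = 0; i < 5; i++) {
    done.push(
      new Promise<void>((resolve) => {
        queue.enqueue(laneFor(i), async () => {
          running += 1;
          peak = Math.max(peak, running);
          await sleep(20);
          running -= 1;
          resolve();
        });
      }),
    );
  }
  await Promise.all(done);
  return peak;
}
```

With `peakParallelism(() => "main", { main: 4 })` the fifth task waits behind the cap (peak 4); with one lane per agent, e.g. `peakParallelism((i) => "agent-" + i, {})`, all five run in parallel (peak 5), which is the behavior this issue is asking for.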
Current Configuration
```json
{
  "agents": {
    "defaults": {
      "maxConcurrent": 100,
      "subagents": {
        "maxConcurrent": 100
      }
    },
    "list": [
      { "id": "ying", "name": "影·总指挥", ... },
      { "id": "feng", "name": "影·风", ... },
      { "id": "luo", "name": "影·洛", ... },
      { "id": "jie", "name": "影·杰", ... },
      { "id": "kaifa", "name": "影·墨", ... }
    ]
  },
  "bindings": [
    { "agentId": "ying", "match": { "channel": "telegram", "accountId": "ying" } },
    { "agentId": "feng", "match": { "channel": "telegram", "accountId": "feng" } },
    { "agentId": "luo", "match": { "channel": "telegram", "accountId": "luo" } },
    { "agentId": "jie", "match": { "channel": "telegram", "accountId": "jie" } },
    { "agentId": "kaifa", "match": { "channel": "telegram", "accountId": "kaifa" } }
  ],
  "channels": {
    "telegram": {
      "accounts": {
        "ying": { "botToken": "..." },
        "feng": { "botToken": "..." },
        "luo": { "botToken": "..." },
        "jie": { "botToken": "..." },
        "kaifa": { "botToken": "..." }
      }
    }
  }
}
```
Expected Behavior
Since each agent has:
- Its own Telegram bot token
- Its own accountId
- Its own binding
- Its own workspace
Messages to different agents should be processed in parallel, not serially queued.
Questions
- How to configure main lane concurrency? The documentation says it's "configurable" but doesn't specify the configuration key.
- Can agents have independent lanes? Is there a way to assign each agent to its own lane to avoid the shared-queue bottleneck?
- Verified on 2026.2.13: Updated to the latest version, but the issue persists. Is there a configuration I'm missing?
Suggested Solutions
- Make `agents.defaults.maxConcurrent` actually control main lane concurrency instead of just being a cap
- Add per-agent lane configuration:

```json
{
  "agents": {
    "list": [
      { "id": "ying", "lane": "ying-lane", "laneConcurrency": 10 },
      { "id": "feng", "lane": "feng-lane", "laneConcurrency": 10 }
    ]
  }
}
```

- Auto-create separate lanes for agents with different accountIds to enable true parallel processing
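The auto-create suggestion could be as small as deriving the lane key from the message's bound account (illustrative sketch only; `InboundMessage` and its field names are assumptions for this example, not OpenClaw's real types):

```typescript
// Hypothetical lane-key derivation: inbound messages from a bound
// account get a lane of their own; anything without an accountId keeps
// using the shared main lane.
interface InboundMessage {
  channel: string;
  accountId?: string;
}

function laneFor(msg: InboundMessage): string {
  return msg.accountId ? `${msg.channel}:${msg.accountId}` : "main";
}
```

With that keying, the five bots in this report would land in five distinct lanes (`telegram:ying`, `telegram:feng`, ...) and drain independently.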
Impact
This bottleneck significantly degrades user experience when running multiple agents:
- User sends messages to 5 agents simultaneously
- Only 4 can process at once (main lane default)
- The 5th agent waits in queue
- Response time increases from seconds to minutes
For multi-agent setups (which OpenClaw explicitly supports), this creates an artificial serialization that defeats the purpose of having multiple independent agents.
Workaround
Currently, there's no known workaround other than:
- Running multiple OpenClaw Gateway instances (defeats the purpose of multi-agent support)
- Accepting the serialized processing (poor UX)
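For anyone who needs the multi-instance workaround in the meantime, the idea is to give each gateway process a disjoint slice of the agent list together with its matching bindings and Telegram accounts (hypothetical sketch; assumes each instance can be pointed at its own config file):

```json
{
  "agents": { "list": [ { "id": "ying", ... }, { "id": "feng", ... } ] },
  "bindings": [
    { "agentId": "ying", "match": { "channel": "telegram", "accountId": "ying" } },
    { "agentId": "feng", "match": { "channel": "telegram", "accountId": "feng" } }
  ],
  "channels": {
    "telegram": {
      "accounts": {
        "ying": { "botToken": "..." },
        "feng": { "botToken": "..." }
      }
    }
  }
}
```

A second instance would carry the remaining agents. This trades the lane bottleneck for extra processes, which is why it's listed as a workaround rather than a fix.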
Thank you for OpenClaw! It's an amazing project. Looking forward to your guidance on this issue.