Summary
In a multi-channel Discord setup, different channel-bound agents/sessions can still appear to queue behind each other even when session routing is already isolated correctly.
After local reproduction and a minimal runtime patch, the bottleneck appears to be the embedded runner's global lane fallback, not `session.dmScope`, not followup queue mode, and not `memory-lancedb-pro`.
Related:
Environment
- OpenClaw: 2026.3.23-2
- Platform: Windows 10 x64
- Channels: Discord
- Setup: multiple Discord channels bound to different agents/workspaces via bindings
Observed behavior
Two different Discord channels were used for concurrent testing.
Expected:
- different channels -> different sessions
- different sessions should be able to run concurrently
Observed before patch:
- channel A starts a long reply
- channel B sends a short request shortly after
- channel B often waits until A mostly/fully finishes before replying
This felt like session-level queueing, but the channels were already isolated.
What was ruled out
1. Session routing / `dmScope`
This did not appear to be the primary cause for the Discord channel case.
Why:
- docs state group/channel chats get their own session keys
- config already used channel bindings with separate agent ids/workspaces
- the issue reproduced across different Discord channels, not only DMs
2. `messages.queue` mode
Changing the queue mode (`collect` / `steer`) alters followup handling, but it does not provide true parallelism across already-isolated sessions.
3. `memory-lancedb-pro`
This was the initial suspicion, but the final successful test strongly suggests memory is not the main bottleneck here.
Relevant code path
The likely bottleneck is in `src/agents/pi-embedded-runner/run.ts`.

Current structure:

```ts
const sessionLane = resolveSessionLane(params.sessionKey?.trim() || params.sessionId);
const globalLane = resolveGlobalLane(params.lane);
// ...
return enqueueSession(() =>
  enqueueGlobal(async () => {
    // ...
  }),
);
```
And in `src/agents/pi-embedded-runner/lanes.ts`:

```ts
export function resolveGlobalLane(lane?: string) {
  const cleaned = lane?.trim();
  if (cleaned === CommandLane.Cron) {
    return CommandLane.Nested;
  }
  return cleaned ? cleaned : CommandLane.Main;
}
```
This means that when no explicit lane is passed, many embedded runs from otherwise separate sessions still funnel through the same global lane (`main`).
That behavior matches the observed cross-channel serialization.
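To make the failure mode concrete, here is a hypothetical sketch (not OpenClaw's actual scheduler) of a per-lane FIFO queue: when two unrelated sessions both fall back to the same `main` lane, the short run waits behind the long one, exactly as observed across the two Discord channels.

```ts
// Hypothetical per-lane FIFO queue illustrating the shared-fallback problem.
type Task<T> = () => Promise<T>;

const laneTails = new Map<string, Promise<unknown>>();

// Tasks enqueued on the same lane run strictly one after another.
function enqueueOnLane<T>(lane: string, task: Task<T>): Promise<T> {
  const tail = laneTails.get(lane) ?? Promise.resolve();
  const next = tail.then(task);
  laneTails.set(lane, next.catch(() => undefined)); // keep the chain alive on errors
  return next;
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Both "channels" fall back to the same "main" lane, so B waits behind A.
async function demo(): Promise<string[]> {
  const order: string[] = [];
  const slowA = enqueueOnLane("main", async () => {
    await sleep(50);
    order.push("A-done");
  });
  const fastB = enqueueOnLane("main", async () => {
    order.push("B-done");
  });
  await Promise.all([slowA, fastB]);
  return order;
}

demo().then((order) => console.log(order.join(","))); // "A-done,B-done"
```

Even though the two tasks come from logically independent sessions, the shared lane forces completion order `A-done,B-done`.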
Minimal local patch that improved the problem
A minimal runtime patch was tested locally in the embedded runner so that the fallback global lane becomes session-scoped instead of always defaulting to `main`.
Patched logic:

```ts
const sessionLane = resolveSessionLane(params.sessionKey?.trim() || params.sessionId);
const resolvedLaneKey = params.sessionKey?.trim() || params.sessionId;
const globalLane =
  params.lane?.trim() ||
  (resolvedLaneKey ? `agent-run:${resolvedLaneKey}` : resolveGlobalLane());
```
This preserves:
- session-level serialization within the same session
- explicit custom lane overrides when provided

but it avoids funneling unrelated embedded runs into the same shared fallback lane.
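The resolution order can be sketched as a standalone function mirroring the patched snippet; the session keys used below (`discord:chanA` etc.) are illustrative, not real OpenClaw values:

```ts
// Sketch of the patched fallback: explicit lane wins, then a
// session-scoped lane, with "main" only as the last resort.
function resolveFallbackGlobalLane(
  lane?: string,
  sessionKey?: string,
  sessionId?: string,
): string {
  const resolvedLaneKey = sessionKey?.trim() || sessionId;
  return lane?.trim() || (resolvedLaneKey ? `agent-run:${resolvedLaneKey}` : "main");
}

console.log(resolveFallbackGlobalLane(undefined, "discord:chanA")); // agent-run:discord:chanA
console.log(resolveFallbackGlobalLane(undefined, "discord:chanB")); // agent-run:discord:chanB
console.log(resolveFallbackGlobalLane("custom", "discord:chanA"));  // custom
console.log(resolveFallbackGlobalLane());                           // main
```

Two different channels now resolve to two different lanes, so their runs no longer share a queue unless a caller explicitly asks for one.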
Local test result
After applying the above minimal patch locally and restarting OpenClaw:
- one Discord channel was given a long, intentionally slow request
- another Discord channel was given a short request shortly after
- the short request returned promptly instead of obviously waiting behind the long one
Initial validation result: successful
This is still a local/manual validation, but it strongly suggests the bottleneck is the embedded global lane fallback.
Why this matters
For users running multiple independent Discord agents/channels, the current fallback can make unrelated sessions feel serialized even when routing is already correct.
This creates the impression that:
- sessions are "stuck"
- OpenClaw is effectively single-threaded across channels
- multi-agent/channel deployments do not scale as expected
Suggested direction
Consider changing the embedded runner fallback so that unrelated embedded runs do not all share `CommandLane.Main` unless explicitly desired.
Possible approaches:
- Session-scoped fallback global lane (minimal change), e.g. `agent-run:<sessionKey>`
- Agent-scoped fallback global lane
- Keep the current default, but only when a run is truly intended to participate in the shared main lane
Relationship to existing reports
Existing reports of cross-session serialization appear to trace to the same embedded code path around `runEmbeddedPiAgent`.
Suggested next step
If helpful, I can open a follow-up PR with the minimal patch described above and include a regression test covering:
- two different isolated sessions
- same process
- no explicit custom lane
- expectation: they should not serialize solely due to fallback global lane selection
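A hypothetical shape of such a regression test (the queue helper and lane naming here are assumptions, not OpenClaw's real test API): with session-scoped fallback lanes, a short run on session B must finish before a long run on session A.

```ts
// Regression-test sketch: distinct fallback lanes => no cross-session queueing.
const tails = new Map<string, Promise<unknown>>();

function enqueue<T>(lane: string, task: () => Promise<T>): Promise<T> {
  const next = (tails.get(lane) ?? Promise.resolve()).then(task);
  tails.set(lane, next.catch(() => undefined));
  return next;
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));
const fallbackLane = (sessionKey: string) => `agent-run:${sessionKey}`;

async function regressionCheck(): Promise<string[]> {
  const finished: string[] = [];
  const longA = enqueue(fallbackLane("discord:channel-a"), async () => {
    await sleep(100);
    finished.push("A");
  });
  const shortB = enqueue(fallbackLane("discord:channel-b"), async () => {
    finished.push("B");
  });
  await Promise.all([longA, shortB]);
  return finished; // ["B", "A"]: B did not wait behind A
}

regressionCheck().then((f) => console.log(f.join(",")));
```

The inverse assertion (both runs forced onto one shared lane finishing in submission order) would guard against regressions back to the old behavior.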