fix(agents): support parallel_tool_calls config per model#37201
Sid-Qin wants to merge 1 commit into openclaw:main from
Conversation
OpenClaw sends `parallel_tool_calls: true` to all OpenAI-compatible providers via the upstream pi-agent-core library. Models that don't support parallel tool calling (e.g. kimi-k2.5 on NVIDIA NIM) return a 400 error, breaking all tool execution.

This change reads `parallel_tool_calls` from model params and injects it into the LLM request payload via an `onPayload` wrapper, letting users set it to `false` per provider/model.

Closes openclaw#37048

Made-with: Cursor
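The wrapper approach described above can be sketched as follows. The `onPayload` hook and the injection logic come from the PR; the type aliases and the standalone `wrapParallelToolCalls` helper are simplified assumptions for illustration, not the library's real signatures:

```ts
// Simplified sketch: wrap a stream function so that a configured
// parallel_tool_calls value is injected into every outgoing request
// payload before the original onPayload callback runs.
type Payload = Record<string, unknown>;
type StreamOptions = { onPayload?: (payload: Payload) => void };
type StreamFn = (model: unknown, context: unknown, options?: StreamOptions) => unknown;

function wrapParallelToolCalls(streamFn: StreamFn, parallelToolCalls: boolean): StreamFn {
  return (model, context, options = {}) => {
    const originalOnPayload = options.onPayload;
    return streamFn(model, context, {
      ...options,
      onPayload: (payload) => {
        // Mutate the payload before it is sent upstream.
        if (payload && typeof payload === "object") {
          payload.parallel_tool_calls = parallelToolCalls;
        }
        originalOnPayload?.(payload);
      },
    });
  };
}
```

Because the wrapper composes with any existing `onPayload`, a caller's own callback still observes the (now mutated) payload, which is also why the security analysis below flags the mutation as affecting the real upstream request.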
🔒 Aisle Security Analysis

We found 1 potential security issue(s) in this PR:
1. 🔵 Unscoped
| Property | Value |
|---|---|
| Severity | Low |
| CWE | CWE-400 |
| Location | src/agents/pi-embedded-runner/extra-params.ts:966-982 |
Description
`applyExtraParamsToAgent()` installs a stream wrapper that unconditionally mutates the outgoing request payload object by adding a top-level `parallel_tool_calls` field whenever it is configured in model params.
Key issues:
- Provider/API not gated: the wrapper runs regardless of `model.api`/provider (Anthropic, Google, Bedrock, Ollama native/api/chat, OpenAI Responses WebSocket, etc.). Many non-OpenAI APIs use strict request-body schemas and may reject unknown top-level fields with a 4xx.
- Mutation affects the real upstream request (not just logging): e.g. in `openai-ws-stream.ts`, `options.onPayload(payload)` is called before `session.manager.send(payload)`. Since this wrapper mutates `payload` inside `onPayload`, the injected field is included in the WebSocket `response.create` message sent to OpenAI.
- Availability risk (DoS via retry loops / repeated failures): when a provider rejects the unknown field, the run loop can re-attempt/failover across profiles/models (up to high retry limits). A misconfigured `parallel_tool_calls` on an incompatible provider/model can therefore cause repeated failing requests, resource consumption, and degraded service.
Vulnerable code (payload injection):

```ts
onPayload: (payload) => {
  if (payload && typeof payload === "object") {
    (payload as Record<string, unknown>).parallel_tool_calls = parallelToolCalls;
  }
  originalOnPayload?.(payload);
}
```

Provider-specific compatibility risks (examples):

- `google-generative-ai`: request bodies are structured differently (often nested `config` objects); unknown top-level keys are likely invalid.
- `anthropic-messages` / Bedrock non-OpenAI adapters: generally strict schemas; unknown fields can cause 400s.
- `ollama` native/api/chat: request schema is `model`/`messages`/`stream`/`tools`/`options`; an extra root key may be rejected by some versions.
- OpenAI Responses WS (`response.create`): if `parallel_tool_calls` is not accepted for this event type/version, this breaks the WS transport entirely.
Recommendation
Gate injection to APIs/providers that are known to support this field, and avoid mutating payloads for strict non-OpenAI request schemas.
Suggested fix (example allowlist by `model.api`):

```ts
const PARALLEL_TOOL_CALLS_APIS = new Set(["openai-completions", "openai-responses"]);

onPayload: (payload) => {
  if (
    PARALLEL_TOOL_CALLS_APIS.has(model.api) &&
    payload &&
    typeof payload === "object"
  ) {
    (payload as Record<string, unknown>).parallel_tool_calls = parallelToolCalls;
  }
  originalOnPayload?.(payload);
}
```

Additionally:
- Consider an explicit provider allowlist (e.g. `openai`, `openrouter`, other confirmed OpenAI-compatible backends) since some `openai-completions` adapters are strict.
- Add tests ensuring this field is not injected for `anthropic-messages`, `google-generative-ai`, `ollama`, etc.
- If unsupported-field errors occur, ensure they are not retried excessively (fail fast / classify as non-retryable configuration error).
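The fail-fast recommendation can be sketched as an error classifier that the run loop consults before retrying. The helper name and error shape below are hypothetical illustrations, not part of the PR:

```ts
// Hypothetical classifier: treat a provider 400 that complains about the
// parallel_tool_calls field as a configuration error rather than a
// transient failure, so the run loop fails fast instead of retrying
// across profiles/models.
interface ProviderError {
  status?: number;
  message?: string;
}

function isNonRetryableParamError(err: ProviderError): boolean {
  return (
    err.status === 400 &&
    typeof err.message === "string" &&
    err.message.includes("parallel_tool_calls")
  );
}
```

A classifier like this keeps a user misconfiguration from being amplified into the repeated-failure availability risk described above.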
Analyzed PR: #37201 at commit 055753b
Last updated on: 2026-03-06T05:24:26Z
Greptile Summary

This PR adds a per-model `parallel_tool_calls` param that is injected into LLM request payloads via an `onPayload` wrapper.
Confidence Score: 4/5
Last reviewed commit: 055753b
```ts
describe("parallel_tool_calls wrapper", () => {
  function capturePayload(provider: string, modelId: string, paramValue: boolean) {
    let captured: Record<string, unknown> = {};
    const fakeStreamFn: StreamFn = (_model, _context, options) => {
      options?.onPayload?.(captured);
      return undefined as never;
    };
    const agent = { streamFn: fakeStreamFn };
    applyExtraParamsToAgent(
      agent,
      {
        agents: {
          defaults: {
            models: {
              [`${provider}/${modelId}`]: {
                params: { parallel_tool_calls: paramValue },
              },
            },
          },
        },
      },
      provider,
      modelId,
    );
    void agent.streamFn(
      { api: "openai-completions", provider, id: modelId } as Model<"openai-completions">,
      {} as Context,
      {},
    );
    return captured;
  }

  it("injects parallel_tool_calls=false when configured", () => {
    const payload = capturePayload("nvidia-nim", "moonshotai/kimi-k2.5", false);
    expect(payload.parallel_tool_calls).toBe(false);
  });

  it("injects parallel_tool_calls=true when configured", () => {
    const payload = capturePayload("openai", "gpt-4", true);
    expect(payload.parallel_tool_calls).toBe(true);
  });
});
```
Missing "no config" backward-compatibility test
The two tests verify that `parallel_tool_calls` is injected when it is explicitly configured, but there is no automated test for the most critical case: when the param is **absent** from the config, the payload should remain unmodified. The PR description mentions this was verified manually, but codifying it prevents future regressions. Consider adding:
```ts
it("does not inject parallel_tool_calls when not configured", () => {
  let captured: Record<string, unknown> = {};
  const fakeStreamFn: StreamFn = (_model, _context, options) => {
    options?.onPayload?.(captured);
    return undefined as never;
  };
  const agent = { streamFn: fakeStreamFn };
  applyExtraParamsToAgent(
    agent,
    { agents: { defaults: { models: {} } } },
    "nvidia-nim",
    "moonshotai/kimi-k2.5",
  );
  void agent.streamFn(
    { api: "openai-completions", provider: "nvidia-nim", id: "moonshotai/kimi-k2.5" } as Model<"openai-completions">,
    {} as Context,
    {},
  );
  expect(captured.parallel_tool_calls).toBeUndefined();
});
```
(Comment on src/agents/pi-embedded-runner-extraparams.test.ts, lines 1357-1398.)
Summary
- Problem: OpenClaw sends `parallel_tool_calls: true` in LLM requests to all OpenAI-compatible providers. Models that don't support parallel tool calling (e.g. `moonshotai/kimi-k2.5` on NVIDIA NIM) return a 400 error, breaking ALL tool execution for the agent.
- Root cause: the upstream pi-agent-core library unconditionally sets `parallel_tool_calls`.
- Fix: add `createParallelToolCallsWrapper`, which reads `parallel_tool_calls` from model params and injects it into the LLM request payload via `onPayload`. Users can now set `parallel_tool_calls: false` per provider/model in their config.

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
User-visible / Behavior Changes
Set `parallel_tool_calls: false` per model to disable parallel tool calling for providers that don't support it:

```json
{
  "agents": {
    "defaults": {
      "models": {
        "nvidia-nim/moonshotai/kimi-k2.5": {
          "params": { "parallel_tool_calls": false }
        }
      }
    }
  }
}
```

Security Impact (required)
- No
- No
- No — same endpoint, modified request body field
- No
- No

Repro + Verification
Environment
Steps
1. Set `parallel_tool_calls: false` in model params

Expected
Actual
`parallel_tool_calls: false` injected into payload; single tool calls work

Evidence
Two new tests: "injects parallel_tool_calls=false when configured" and "injects parallel_tool_calls=true when configured" verify the wrapper.
Human Verification (required)
- `parallel_tool_calls: false` (injected)
- `parallel_tool_calls: true` (injected)
- no param set (no wrapper applied, default behavior preserved)

Compatibility / Migration
- Yes — only applies when `parallel_tool_calls` is explicitly set in model params
- No — new optional param
- No

Failure Recovery (if this breaks)
- Remove `parallel_tool_calls` from model params, or revert this commit
- `src/agents/pi-embedded-runner/extra-params.ts`

Risks and Mitigations
Setting `parallel_tool_calls: false` for models that support it may reduce performance. Mitigation: the setting is per-model and opt-in only.