llm-task plugin tool parameters missing type field — breaks llama.cpp backends #35443
Description
The llm-task plugin tool has two parameters (`input` and `schema`) whose JSON Schema definitions contain only a `description` field without a `type` field. This is technically valid JSON Schema (it implies "any type"), but llama.cpp's OpenAI-compatible endpoint rejects it with a 400 error during JSON schema-to-grammar conversion.
This breaks any agent using local LLM backends (llama.cpp / llama-server) when tools are enabled and llm-task is in the tool set.
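The problematic shape is easy to detect before a request ever reaches the backend. A minimal sketch of such a check (the keyword set below is an assumption inferred from the "Unrecognized schema" error text, not llama.cpp's actual converter logic; `find_untyped_params` is a hypothetical helper):

```python
# Flag tool parameter schemas that a schema-to-grammar converter is
# likely to reject: no "type" and no other schema-defining keyword.
# The recognized-keyword set is an assumption, not llama.cpp's code.
RECOGNIZED = {"type", "enum", "const", "oneOf", "anyOf", "allOf", "$ref"}

def find_untyped_params(tool_parameters: dict) -> list[str]:
    """Return property names whose schema carries only a description."""
    bad = []
    for name, schema in tool_parameters.get("properties", {}).items():
        if not RECOGNIZED & schema.keys():
            bad.append(name)
    return bad

# Mirrors the llm-task parameter definitions quoted in this issue.
params = {
    "type": "object",
    "properties": {
        "prompt": {"description": "Task instruction for the LLM.", "type": "string"},
        "input": {"description": "Optional input payload for the task."},
        "schema": {"description": "Optional JSON Schema to validate the returned JSON."},
    },
    "required": ["prompt"],
}
print(find_untyped_params(params))  # -> ['input', 'schema']
```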
Affected parameters
```json
{
  "input": {"description": "Optional input payload for the task."},
  "schema": {"description": "Optional JSON Schema to validate the returned JSON."}
}
```

Both are missing `"type"`. Compare with other parameters in the same tool that work fine:
```json
{
  "prompt": {"description": "Task instruction for the LLM.", "type": "string"}
}
```

Reproduction
Send any chat completion request through an agent with llm-task enabled, using a llama.cpp-based backend:
```shell
curl -s http://localhost:8011/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"any","messages":[{"role":"user","content":"hi"}],"max_tokens":10,
    "tools":[{"type":"function","function":{"name":"llm_task","description":"test",
    "parameters":{"type":"object","properties":{
      "prompt":{"type":"string","description":"ok"},
      "input":{"description":"no type field"},
      "schema":{"description":"no type field"}
    },"required":["prompt"]}}}]}'
```

Response:

```json
{"error":{"code":400,"message":"JSON schema conversion failed:\nUnrecognized schema: {\"description\":\"no type field\"}\nUnrecognized schema: {\"description\":\"no type field\"}","type":"invalid_request_error"}}
```

Adding `"type": "object"` to both parameters resolves the error.
Impact
- Any agent with `llm-task` enabled that routes through `local/*` TensorZero providers (or direct llama.cpp endpoints) gets a 400 on every request
- The error surfaces as HTTP 502 through TensorZero, which retries and then fails
- Streaming requests return `data: [DONE]` immediately with no content (a silent failure from the caller's perspective)
Workaround
Set `tools.profile: "minimal"` or `tools.deny: ["llm-task"]` on the affected agent to exclude the tool.
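If excluding the tool is too coarse, another workaround is to patch the request payload on the client side before it reaches the llama.cpp endpoint. A sketch, assuming you can intercept the outgoing chat-completion body (`patch_tools` and `PERMISSIVE` are hypothetical names, not part of any existing API):

```python
# Client-side workaround sketch: give any tool parameter schema that
# lacks a "type" a permissive type union, so llama.cpp's
# schema-to-grammar conversion accepts it. Assumes you control the
# request payload before it is sent.
PERMISSIVE = ["object", "string", "number", "array", "boolean", "null"]

def patch_tools(payload: dict) -> dict:
    """Mutate a chat-completion payload in place; return it for chaining."""
    for tool in payload.get("tools", []):
        props = tool.get("function", {}).get("parameters", {}).get("properties", {})
        for schema in props.values():
            if "type" not in schema:
                schema["type"] = PERMISSIVE
    return payload

payload = {
    "model": "any",
    "messages": [{"role": "user", "content": "hi"}],
    "tools": [{"type": "function", "function": {
        "name": "llm_task",
        "parameters": {"type": "object", "properties": {
            "prompt": {"type": "string", "description": "ok"},
            "input": {"description": "no type field"},
        }, "required": ["prompt"]},
    }}],
}
patched = patch_tools(payload)
```

Typed parameters like `prompt` are left untouched; only the untyped ones are widened.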
Suggested fix
Add `"type": "object"` (or `"type": ["object", "string", "number", "array", "boolean", "null"]`) to the `input` and `schema` parameter definitions in the llm-task plugin tool registration.
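Concretely, the registered parameter schemas would become something like the following (a sketch; the exact registration syntax depends on the plugin source, which is not shown in this issue):

```json
{
  "input": {
    "description": "Optional input payload for the task.",
    "type": "object"
  },
  "schema": {
    "description": "Optional JSON Schema to validate the returned JSON.",
    "type": "object"
  }
}
```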
Environment
- OpenClaw: 2026.3.2
- llama.cpp: latest (llama-server with OpenAI-compatible endpoint)
- Model: Qwen3.5-35B-A3B (also confirmed with Qwen3.5-9B)
- OS: macOS (arm64)