fix(llm-task): add explicit type to input/schema params for llama.cpp compat #35463
jmzlx wants to merge 2 commits into openclaw:main
Conversation
fix(llm-task): add explicit type to input/schema params for llama.cpp compat
Type.Unknown() emits JSON Schema without a 'type' field ({description: ...}).
llama.cpp rejects this during schema-to-grammar conversion (400 Bad Request).
Replace with Type.Unsafe<unknown>({type: 'object', ...}) which follows the
existing pattern used in lobster and feishu extensions.
Fixes openclaw#35443.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fc7675acd6
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```ts
// rejects during JSON schema-to-grammar conversion (400 Bad Request).
input: Type.Optional(
  Type.Unsafe<unknown>({
    type: "object",
```
Preserve non-object JSON input support
Changing input to type: "object" narrows the tool contract from “any JSON” to objects only, which is a behavior regression for workflows that pass scalar or array payloads. createLlmTaskTool.execute still treats input as arbitrary JSON (JSON.stringify(input ?? null)), so this schema restriction can cause tool-call generation/validation to fail upstream (for example when a caller needs input to be a string/array/number/null) before execute is reached.
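For context, the serialization behavior the comment refers to can be sketched in isolation. This is a hypothetical standalone helper (the source reportedly calls `JSON.stringify(input ?? null)` inline in `execute`), showing that any JSON-serializable value is handled, not just objects:

```typescript
// Hypothetical helper mirroring the execute path's serialization of `input`.
// Any JSON value round-trips; undefined collapses to null via `??`.
function buildInputJson(input: unknown): string {
  return JSON.stringify(input ?? null);
}

console.log(buildInputJson([1, 2, 3])); // [1,2,3]
console.log(buildInputJson("text"));    // "text"
console.log(buildInputJson(42));        // 42
console.log(buildInputJson(undefined)); // null
```

This is why the comment calls the schema change a narrowing: the runtime accepts every case above, while a `type: "object"` schema only advertises the first.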
Greptile Summary
This PR fixes a llama.cpp compatibility issue in the llm-task extension.
Changes:
Verification:
Confidence Score: 4/5
Last reviewed commit: fc7675a
```ts
input: Type.Optional(
  Type.Unsafe<unknown>({
    type: "object",
    description: "Optional input payload for the task.",
  }),
),
```
The change from Type.Unknown() to type: "object" introduces a subtle schema narrowing. The runtime at lines 174–180 accepts any JSON-serializable value for input (including arrays, strings, numbers, etc.), but the schema now restricts it to objects only. LLMs that enforce tool schemas will reject non-object inputs, even though the implementation handles them.
For schema, using type: "object" is correct—JSON Schema documents are always objects. But for input, consider whether this restriction matches the intended API contract. If non-object inputs should be supported, consider:
Current:

```ts
input: Type.Optional(
  Type.Unsafe<unknown>({
    type: "object",
    description: "Optional input payload for the task.",
  }),
),
```

Suggested:

```ts
input: Type.Optional(
  Type.Unsafe<unknown>({
    type: ["object", "array", "string", "number", "boolean", "null"],
    description: "Optional input payload for the task.",
  }),
),
```
Alternatively, if only objects are intended but should remain flexible in structure:
```ts
Type.Unsafe<unknown>({
  type: "object",
  additionalProperties: true,
  description: "Optional input payload for the task.",
})
```

If `input` is always expected to be a JSON object for orchestration workflows (which is common), document this explicitly in the parameter description to avoid silent regressions.
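The suggested union `type` covers every JSON type, which is the point: it readmits the "any JSON" contract. A value-side sketch makes that visible (the `jsonType` helper is hypothetical, not from the PR; real validation would go through a JSON Schema validator):

```typescript
// Hypothetical classifier mapping a runtime value to its JSON Schema type name.
function jsonType(value: unknown): string {
  if (value === null) return "null";
  if (Array.isArray(value)) return "array";
  return typeof value; // "object" | "string" | "number" | "boolean"
}

// The union type suggested in the review comment above.
const allowed = ["object", "array", "string", "number", "boolean", "null"];

console.log(allowed.includes(jsonType({ a: 1 }))); // true
console.log(allowed.includes(jsonType([1, 2])));   // true
console.log(allowed.includes(jsonType("hi")));     // true
console.log(allowed.includes(jsonType(null)));     // true
```

Every JSON-serializable value lands in the allowed set, so the union schema admits exactly what the execute path already handled.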
Prompt To Fix With AI
This is a comment left during a code review.
Path: extensions/llm-task/src/llm-task-tool.ts
Line: 80-85
- Add additionalProperties: true to input and schema Type.Unsafe definitions
- Narrow TypeScript type from unknown to Record<string, unknown>
- Add runtime guard rejecting arrays, strings, and scalars for input
- Update input description to document object-only contract

Addresses review feedback from greptile-apps and chatgpt-codex-connector on the Type.Unknown() → Type.Unsafe schema narrowing.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d634f19b25
```ts
if (input != null && (typeof input !== "object" || Array.isArray(input))) {
  throw new Error("input must be a JSON object (not an array, string, or scalar)");
}
```
Preserve non-object JSON inputs in llm-task
The new guard in createLlmTaskTool.execute now throws whenever input is an array or scalar, even though this tool previously accepted arbitrary JSON and still builds the prompt by JSON.stringify(input ?? null). This introduces a backward-incompatible runtime failure for existing workflows that pass list/string/number payloads (for example, classification over an array of items), and it affects all providers because the exception is raised before the model call.
The backward-compat concern doesn't apply here — Type.Unknown() was already broken on llama.cpp backends (the whole reason for this PR). Any workflow passing non-object inputs was already getting 400s during schema-to-grammar conversion.
The runtime guard aligns the execute path with the schema contract: both now say "objects only." Without it, a programmatic caller bypassing schema validation could still pass an array/scalar that silently produces a weird prompt (INPUT_JSON: [1,2,3]) — technically not a crash, but not correct behavior either.
Keeping the guard for contract consistency. The narrowing is intentional and documented in the parameter description.
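The guard's accept/reject behavior can be sketched as a standalone predicate (function names hypothetical; the PR inlines the check in `execute`):

```typescript
// Standalone sketch of the runtime guard added in commit 2. Accepts
// undefined/null (input is optional) and plain objects; rejects arrays,
// strings, numbers, and booleans before any model call is made.
function assertObjectInput(input: unknown): void {
  if (input != null && (typeof input !== "object" || Array.isArray(input))) {
    throw new Error("input must be a JSON object (not an array, string, or scalar)");
  }
}

// Convenience wrapper for demonstrating which inputs pass the guard.
function accepts(input: unknown): boolean {
  try {
    assertObjectInput(input);
    return true;
  } catch {
    return false;
  }
}

console.log(accepts({ items: [1, 2, 3] })); // true: plain object
console.log(accepts(undefined));            // true: input is optional
console.log(accepts([1, 2, 3]));            // false: arrays rejected
console.log(accepts("text"));               // false: scalars rejected
```

Note that `input != null` uses loose equality, so both `null` and `undefined` skip the check, matching the optional schema.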
Problem

The `llm-task` plugin defines the `input` and `schema` parameters using `Type.Unknown()`, which emits JSON Schema without a `type` field:

`{"description": "Optional input payload for the task."}`

llama.cpp rejects this during schema-to-grammar conversion with 400 Bad Request. This breaks any agent with `llm-task` enabled that routes through llama.cpp-based backends.

Fix

Replace `Type.Unknown()` with `Type.Unsafe<unknown>({type: "object", ...})`, following the existing pattern used in the lobster (lobster-tool.ts:218) and feishu extensions.

Verified

- `npx tsc --noEmit` — no errors in changed file

Fixes #35443.
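The failure mode can be illustrated with a toy check. This is NOT llama.cpp's actual grammar converter, only a sketch of the precondition it enforces: a schema node needs a `type` to know what kind of value to generate.

```typescript
// Toy stand-in for a schema-to-grammar precondition (not llama.cpp's code):
// a bare { description: ... } node gives the converter nothing to dispatch on.
type JsonSchemaNode = { type?: string | string[]; description?: string };

function hasDispatchableType(schema: JsonSchemaNode): boolean {
  return typeof schema.type === "string" || Array.isArray(schema.type);
}

// What Type.Unknown() emits vs. what Type.Unsafe<unknown>({type: "object"}) emits.
const fromTypeUnknown = { description: "Optional input payload for the task." };
const fromTypeUnsafe = { type: "object", description: "Optional input payload for the task." };

console.log(hasDispatchableType(fromTypeUnknown)); // false: rejected (400)
console.log(hasDispatchableType(fromTypeUnsafe));  // true: convertible
```

The real converter also handles structural keywords like `oneOf` and `$ref`; the sketch only captures why a type-less node is the problematic case here.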