Skip to content

fix(llm-task): add explicit type to input/schema params for llama.cpp compat#35463

Open
jmzlx wants to merge 2 commits intoopenclaw:mainfrom
jmzlx:fix/llm-task-schema-llama-cpp
Open

fix(llm-task): add explicit type to input/schema params for llama.cpp compat#35463
jmzlx wants to merge 2 commits intoopenclaw:mainfrom
jmzlx:fix/llm-task-schema-llama-cpp

Conversation

@jmzlx
Copy link
Copy Markdown

@jmzlx jmzlx commented Mar 5, 2026

Problem

The llm-task plugin defines input and schema parameters using Type.Unknown(), which emits JSON Schema without a type field:

{"description": "Optional input payload for the task."}

llama.cpp rejects this during schema-to-grammar conversion with 400 Bad Request:

JSON schema conversion failed: Unrecognized schema: {"description":"Optional input payload for the task."}

This breaks any agent with llm-task enabled that routes through llama.cpp-based backends.

Fix

Replace Type.Unknown() with Type.Unsafe<unknown>({type: "object", ...}), following the existing pattern used in lobster (lobster-tool.ts:218) and feishu extensions.

Verified

  • Before fix: llama.cpp returns 400 on every tool-enabled request
  • After fix: llama.cpp accepts the schema, streaming works correctly
  • Tested with Qwen3.5-35B-A3B and Qwen3.5-9B via TensorZero routing
  • TypeScript compiles clean (npx tsc --noEmit — no errors in changed file)
  • Linter passes (0 warnings, 0 errors)
  • No behavior change for cloud LLM providers (schema is still valid JSON Schema, now with an explicit type)

Fixes #35443.

… compat

Type.Unknown() emits JSON Schema without a 'type' field ({description: ...}).
llama.cpp rejects this during schema-to-grammar conversion (400 Bad Request).

Replace with Type.Unsafe<unknown>({type: 'object', ...}) which follows the
existing pattern used in lobster and feishu extensions.

Fixes openclaw#35443.
@jmzlx jmzlx marked this pull request as ready for review March 5, 2026 04:53
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fc7675acd6

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

// rejects during JSON schema-to-grammar conversion (400 Bad Request).
input: Type.Optional(
Type.Unsafe<unknown>({
type: "object",
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Preserve non-object JSON input support

Changing input to type: "object" narrows the tool contract from “any JSON” to objects only, which is a behavior regression for workflows that pass scalar or array payloads. createLlmTaskTool.execute still treats input as arbitrary JSON (JSON.stringify(input ?? null)), so this schema restriction can cause tool-call generation/validation to fail upstream (for example when a caller needs input to be a string/array/number/null) before execute is reached.

Useful? React with 👍 / 👎.

@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps bot commented Mar 5, 2026

Greptile Summary

This PR fixes a llama.cpp compatibility issue in the llm-task plugin by replacing Type.Unknown() with Type.Unsafe<unknown>({type: "object", ...}) for the input and schema tool parameters. The root cause is that Type.Unknown() emits a JSON Schema fragment without a type field, which llama.cpp rejects during its schema-to-grammar conversion step, returning a 400 error for any tool-enabled request.

Changes:

  • input and schema parameters now emit {"type": "object", "description": "..."} instead of {"description": "..."}, satisfying llama.cpp's schema parser.
  • The fix follows the existing Type.Unsafe pattern already in use in other parts of the codebase.

Verification:

  • For schema, using type: "object" is semantically correct — JSON Schema documents are always objects.
  • For input, the schema change introduces a behavioral narrowing: it now restricts callers to JSON object values only, while the original Type.Unknown() permitted any JSON-serializable value (arrays, strings, numbers, etc.). The runtime implementation (lines 174–180) explicitly accepts and serializes any JSON value, creating a gap between schema and implementation. Callers passing array or scalar inputs through llama.cpp backends will be rejected by schema validation, though the runtime could handle them. The PR description emphasizes "orchestration from Lobster workflows", where structured object data is typical, so the narrowing may be intentional—but should be explicitly documented if so.

Confidence Score: 4/5

  • Safe to merge. The fix correctly solves the llama.cpp 400 error and has been verified to work with actual LLM backends. There is a schema narrowing for the input parameter that warrants clarification in the parameter documentation.
  • The fix directly addresses the reported llama.cpp compatibility issue and has been tested with real backends (Qwen3.5 models). TypeScript compilation and linting pass. The primary concern is that the input parameter's schema now restricts to objects only, while the runtime accepts any JSON-serializable value. This is a low-risk behavioral delta for typical orchestration use cases (which expect structured data), but callers passing arrays, scalars, or other JSON types to llama.cpp backends would be silently restricted. The change is intentional and well-commented in the code, but explicit documentation of the API contract in the parameter description would clarify intent.
  • extensions/llm-task/src/llm-task-tool.ts — clarify whether input schema restriction to objects is intentional, or if broader JSON type support should be maintained for backward compatibility with potential non-object callers.

Last reviewed commit: fc7675a

Comment on lines +80 to +85
input: Type.Optional(
Type.Unsafe<unknown>({
type: "object",
description: "Optional input payload for the task.",
}),
),
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The change from Type.Unknown() to type: "object" introduces a subtle schema narrowing. The runtime at lines 174–180 accepts any JSON-serializable value for input (including arrays, strings, numbers, etc.), but the schema now restricts it to objects only. LLMs that enforce tool schemas will reject non-object inputs, even though the implementation handles them.

For schema, using type: "object" is correct—JSON Schema documents are always objects. But for input, consider whether this restriction matches the intended API contract. If non-object inputs should be supported, consider:

Suggested change
input: Type.Optional(
Type.Unsafe<unknown>({
type: "object",
description: "Optional input payload for the task.",
}),
),
input: Type.Optional(
Type.Unsafe<unknown>({
type: ["object", "array", "string", "number", "boolean", "null"],
description: "Optional input payload for the task.",
}),
),

Alternatively, if only objects are intended but should remain flexible in structure:

Type.Unsafe<unknown>({
  type: "object",
  additionalProperties: true,
  description: "Optional input payload for the task.",
})

If input is always expected to be a JSON object for orchestration workflows (which is common), document this explicitly in the parameter description to avoid silent regressions.

Prompt To Fix With AI
This is a comment left during a code review.
Path: extensions/llm-task/src/llm-task-tool.ts
Line: 80-85

Comment:
The change from `Type.Unknown()` to `type: "object"` introduces a subtle schema narrowing. The runtime at lines 174–180 accepts any JSON-serializable value for `input` (including arrays, strings, numbers, etc.), but the schema now restricts it to objects only. LLMs that enforce tool schemas will reject non-object inputs, even though the implementation handles them.

For `schema`, using `type: "object"` is correct—JSON Schema documents are always objects. But for `input`, consider whether this restriction matches the intended API contract. If non-object inputs should be supported, consider:

```suggestion
      input: Type.Optional(
        Type.Unsafe<unknown>({
          type: ["object", "array", "string", "number", "boolean", "null"],
          description: "Optional input payload for the task.",
        }),
      ),
```

Alternatively, if only objects are intended but should remain flexible in structure:

```ts
Type.Unsafe<unknown>({
  type: "object",
  additionalProperties: true,
  description: "Optional input payload for the task.",
})
```

If `input` is always expected to be a JSON object for orchestration workflows (which is common), document this explicitly in the parameter description to avoid silent regressions.

How can I resolve this? If you propose a fix, please make it concise.

- Add additionalProperties: true to input and schema Type.Unsafe definitions
- Narrow TypeScript type from unknown to Record<string, unknown>
- Add runtime guard rejecting arrays, strings, and scalars for input
- Update input description to document object-only contract

Addresses review feedback from greptile-apps and chatgpt-codex-connector
on the Type.Unknown() → Type.Unsafe schema narrowing.
Copy link
Copy Markdown

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d634f19b25

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment on lines +177 to +178
if (input != null && (typeof input !== "object" || Array.isArray(input))) {
throw new Error("input must be a JSON object (not an array, string, or scalar)");
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Preserve non-object JSON inputs in llm-task

The new guard in createLlmTaskTool.execute now throws whenever input is an array or scalar, even though this tool previously accepted arbitrary JSON and still builds the prompt by JSON.stringify(input ?? null). This introduces a backward-incompatible runtime failure for existing workflows that pass list/string/number payloads (for example, classification over an array of items), and it affects all providers because the exception is raised before the model call.

Useful? React with 👍 / 👎.

Copy link
Copy Markdown
Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The backward-compat concern doesn't apply here — Type.Unknown() was already broken on llama.cpp backends (the whole reason for this PR). Any workflow passing non-object inputs was already getting 400s during schema-to-grammar conversion.

The runtime guard aligns the execute path with the schema contract: both now say "objects only." Without it, a programmatic caller bypassing schema validation could still pass an array/scalar that silently produces a weird prompt (INPUT_JSON: [1,2,3]) — technically not a crash, but not correct behavior either.

Keeping the guard for contract consistency. The narrowing is intentional and documented in the parameter description.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

Projects

None yet

Development

Successfully merging this pull request may close these issues.

llm-task plugin tool parameters missing type field — breaks llama.cpp backends

1 participant