[Bug]: infer model run --local --prompt "" reaches the provider; --gateway correctly rejects it (Claude shows "No text output returned"; DeepSeek silently bills) #73185
Behavior bug (incorrect output/state without crash)
Beta release blocker
No
Summary
pnpm openclaw infer model run --model <id> --prompt "" (and " ", $'\n') sends the empty/whitespace user turn to the provider with no client-side validation. The symptom is provider-dependent: against the default anthropic/claude-opus-4-7, the call goes out and returns with no content, surfacing as a misleading Error: No text output returned for provider "anthropic" model "claude-opus-4-7" (exit 1); the user is told there is no output instead of being told their prompt was empty. Against deepseek/deepseek-v4-flash, the same input is silently accepted, the model generates ~3.5 KB of unrelated content, and the user is billed (exit 0). Compare: omitting --prompt entirely is correctly rejected by Commander before any provider call. The presence check exists; the content check is missing.
Steps to reproduce
Fresh checkout of openclaw at v2026.4.26 (commit 53b7e20); pnpm install && pnpm build on Node 22.22.2.
With the default model set to anthropic/claude-opus-4-7 and anthropic auth configured (Claude CLI OAuth in this run).
Run pnpm openclaw infer model run --model anthropic/claude-opus-4-7 --prompt "".
Observe Error: No text output returned for provider "anthropic" model "claude-opus-4-7" and exit 1 — but the request reached the provider; the error is downstream of dispatch.
Repeat with --prompt " " and --prompt $'\n' — same misleading error, request still goes out.
For comparison, omit --prompt entirely: pnpm openclaw infer model run --model anthropic/claude-opus-4-7 — Commander correctly errors with error: required option --prompt not specified and exits 1 before any provider call.
Optional secondary repro on a different provider to confirm root cause is in the CLI seam: configure deepseek and run pnpm openclaw infer model run --model deepseek/deepseek-v4-flash --prompt "" — completes with "ok": true, exit 0, and a ~3.5 KB unrelated assistant turn.
Expected behavior
--prompt should be validated as a non-empty, non-whitespace-only string before any provider request is dispatched. On empty or whitespace-only input the CLI should print a clear error such as error: --prompt cannot be empty or whitespace-only and exit non-zero with no provider call.
Concrete grounded reference from the same CLI/build: omitting --prompt entirely already fails fast and refuses to call the provider — infer model run --model anthropic/claude-opus-4-7 returns error: required option '--prompt <text>' not specified and exits 1. The presence check exists; only the content check is missing. Empty-string and whitespace-only inputs should follow the same fail-fast contract as the missing case, since neither carries any user intent.
Today, both default-path users (Claude → confusing "No text output returned" error after a real provider round-trip) and side-provider users (DeepSeek → silent billed completion) are funneled into the same upstream gap.
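The expected check is small. A minimal sketch of the missing content validation (the function name and error wording here are illustrative assumptions, not code from the openclaw repo):

```typescript
// Hypothetical validator: reject empty or whitespace-only --prompt values
// before any provider request is dispatched.
function validatePromptContent(raw: string): string {
  if (raw.trim().length === 0) {
    throw new Error("--prompt cannot be empty or whitespace-only");
  }
  return raw;
}
```

Wired into the option parser, this would make --prompt "", --prompt " ", and --prompt $'\n' fail fast exactly like the missing-option case, with no provider call.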
Actual behavior
$ pnpm openclaw infer model run --model anthropic/claude-opus-4-7 --prompt ""
Error: No text output returned for provider "anthropic" model "claude-opus-4-7".
ELIFECYCLE Command failed with exit code 1.
$ pnpm openclaw infer model run --model anthropic/claude-opus-4-7 --prompt " "
Error: No text output returned for provider "anthropic" model "claude-opus-4-7".
ELIFECYCLE Command failed with exit code 1.
$ pnpm openclaw infer model run --model anthropic/claude-opus-4-7
error: required option '--prompt <text>' not specified
ELIFECYCLE Command failed with exit code 1.
Provider / routing chain
openclaw -> local transport -> anthropic
openclaw -> local transport -> deepseek
openclaw -> gateway (ws://127.0.0.1:18789) -> anthropic
openclaw -> gateway (ws://127.0.0.1:18789) -> deepseek
Additional provider/model setup details
Default model: anthropic/claude-opus-4-7 (set via pnpm openclaw models set anthropic/claude-opus-4-7).
Anthropic auth: Claude CLI OAuth (detected by openclaw doctor; no manual login required for infer model run in this environment).
DeepSeek auth: registered via openclaw models auth login --provider deepseek (no --set-default); credential under ~/.openclaw/credentials/.
No per-agent overrides beyond the default main agent. No reverse proxy, no gateway proxy chain.
Logs, screenshots, and evidence
=== Verified matrix (single shell session, OpenClaw 2026.4.26 / 53b7e20, Node 22.22.2) ===
TRANSPORT PROVIDER PROMPT EXIT SYMPTOM
local anthropic/claude-opus-4-7 "" 1 Error: No text output returned for provider "anthropic" model "claude-opus-4-7"
local anthropic/claude-opus-4-7 "" 1 Same
local deepseek/deepseek-v4-flash "" 0 ok=true, output_chars=376 (billed)
local deepseek/deepseek-v4-flash "" 0 ok=true, output_chars=~300 (billed)
gateway anthropic/claude-opus-4-7 "" 1 GatewayClientRequestError: invalid agent params: at /message: must NOT have fewer than 1 characters
gateway anthropic/claude-opus-4-7 "" 1 GatewayClientRequestError: Error: Message (--message) is required
gateway deepseek/deepseek-v4-flash "" 1 Same gateway rejection
gateway deepseek/deepseek-v4-flash "" 1 Same gateway rejection
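The gateway error wording (`must NOT have fewer than 1 characters`) matches an Ajv-style JSON-schema `minLength` violation at `/message`, i.e. a server-side constraint shaped roughly like the fragment below (an assumed shape for illustration, not the actual gateway schema):

```json
{
  "type": "object",
  "properties": {
    "message": { "type": "string", "minLength": 1 }
  },
  "required": ["message"]
}
```

A client-side --prompt content check would give --local the same fail-fast contract the gateway already enforces server-side.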
=== Local + claude (the bug, default path) ===
$ pnpm openclaw infer model run --local --model anthropic/claude-opus-4-7 --prompt ""
Error: No text output returned for provider "anthropic" model "claude-opus-4-7".
ELIFECYCLE Command failed with exit code 1.
$ pnpm openclaw infer model run --local --model anthropic/claude-opus-4-7 --prompt ""
Error: No text output returned for provider "anthropic" model "claude-opus-4-7".
$ pnpm openclaw infer model run --local --model anthropic/claude-opus-4-7 --prompt $'\n'
Error: No text output returned for provider "anthropic" model "claude-opus-4-7".
=== Local + deepseek --json (silent billing repro on permissive provider) ===
$ pnpm openclaw infer model run --local --model deepseek/deepseek-v4-flash --prompt "" --json
{
"ok": true,
"capability": "model.run",
"transport": "local",
"provider": "deepseek",
"model": "deepseek-v4-flash",
"attempts": [],
"outputs": [
{ "text": "<376-char Chinese assistant turn>", "mediaUrl": null }
]
}
=== Gateway + claude (in-tree counter-example: validation already exists) ===
$ pnpm openclaw infer model run --gateway --model anthropic/claude-opus-4-7 --prompt ""
GatewayClientRequestError: invalid agent params: at /message: must NOT have fewer than 1 characters
ELIFECYCLE Command failed with exit code 1.
$ pnpm openclaw infer model run --gateway --model anthropic/claude-opus-4-7 --prompt ""
GatewayClientRequestError: Error: Message (--message) is required
ELIFECYCLE Command failed with exit code 1.
=== Counter-example: missing --prompt is rejected by Commander on both transports ===
$ pnpm openclaw infer model run --model anthropic/claude-opus-4-7
error: required option '--prompt <text>' not specified
ELIFECYCLE Command failed with exit code 1.
=== Sanity check: Claude works correctly with real prompts on both transports ===
$ pnpm openclaw infer model run --local --model anthropic/claude-opus-4-7 --prompt "Say hello in one short sentence."
Hello, nice to meet you!
$ pnpm openclaw infer model run --gateway --model anthropic/claude-opus-4-7 --prompt "Say hello in one short sentence."
Hello! Hope your night is going well.
Impact and severity
Affected users/systems/channels:
Every operator using openclaw infer model run (alias openclaw capability model run) on the CLI. Linux directly observed (Ubuntu 24.04 / Node 22.22.2); the bug is in the CLI argument-handling seam upstream of provider/transport code, so behavior is expected on macOS/Windows installs but only Linux was directly reproduced.
Default-path users (Claude) see a confusing error that misattributes the cause to the provider rather than to their input.
Side-provider users (DeepSeek and any provider that auto-completes empty turns) get billed without a clear signal.
Severity:
For default Claude users: confusing UX. The error "No text output returned for provider 'anthropic' model 'claude-opus-4-7'" reads like a provider/model fault, sending users down the wrong debugging path.
For non-Claude users on a billing model: a real, repeatable cost component. Each accidental empty-prompt invocation is a billable round-trip producing an unrelated multi-KB response (~3.5 KB observed for one run on deepseek-v4-flash).
Easy to hit from shell scripts that interpolate an unset variable (--prompt "$VAR" with VAR=) or from accidentally-blank text in a heredoc.
Not a security issue and no data corruption observed.
Frequency:
Always, deterministic on --local. 100% reproduction across --prompt "", --prompt " ", and --prompt $'\n' for both providers tested. 0% reproduction on --gateway — that path correctly fails fast with the gateway's own JSON-schema validation. The bug is specific to --local transport; --gateway is the in-tree counter-example.
Consequence:
Misleading "No text output returned" errors blamed on provider/model when the real cause is empty user input.
On non-Claude providers: wasted provider tokens on degenerate inputs that the CLI never had to send.
Shell-script breakage: --prompt "$INPUT" with an empty $INPUT either silently bills (DeepSeek-shaped providers) or surfaces as a flapping "No text output returned" failure that triggers retries (Claude-shaped providers).
No grounded evidence of missed messages, failed onboarding, or data risk.
Additional information
Regression status: not classified as a Regression. Last-known-good not directly observed; no bisect performed.
Likely fix locus: the argument handler for infer model run (the Commander definition for the subcommand under src/cli/cli-infer/, or wherever the parser lives in this build). Add a validator on the --prompt value that rejects empty strings and /^\s*$/-only strings before dispatching to the capability runner. The shared CLI helpers under src/cli/program/ already validate required-option presence; this is the same pattern applied to content.
Suggested regression test: a unit test next to the infer model run action that asserts each of --prompt "", --prompt " ", and --prompt $'\n' on --local returns a non-zero exit and a "--prompt cannot be empty" error message, with the transport seam mocked to assert it is never called for empty/whitespace input. Pair with a parity test that confirms --local and --gateway produce equivalent rejection behavior for the same empty-content inputs (modulo error wording) — gateway-side validation already exists and is the in-tree reference.
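The mocked-seam assertion described above could look like this sketch (all names here, including runModel and Transport, are hypothetical; the real action and transport seam live elsewhere in the repo, and the real seam is likely async):

```typescript
// Hypothetical shape of the action: content check first, transport second,
// so a test can assert the seam is never reached for blank input.
type Transport = (prompt: string) => string;

function runModel(prompt: string, transport: Transport): string {
  if (/^\s*$/.test(prompt)) {
    throw new Error("--prompt cannot be empty or whitespace-only");
  }
  return transport(prompt);
}

// Mock transport that records whether it was ever called.
let transportCalled = false;
const mockTransport: Transport = (p) => {
  transportCalled = true;
  return `ok:${p}`;
};

for (const blank of ["", " ", "\n"]) {
  try {
    runModel(blank, mockTransport);
  } catch {
    // expected: blank input rejected before dispatch
  }
}
// transportCalled remains false here: the seam was never hit.
```

The parity half of the test would run the same blank inputs through the gateway path and assert an equivalent non-zero rejection, wording aside.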
Not exercised in this repro: streaming-mode requests, per-agent overrides beyond main, batched/concurrent runs. The bug is upstream of provider selection so all providers reachable through infer model run are expected to be affected, with the exact symptom shape determined by each provider's empty-turn handling.
=== Local + deepseek --json (separate run; full output length via jq) ===
$ pnpm openclaw infer model run --model deepseek/deepseek-v4-flash --prompt "" --json | jq '{ok, transport, provider, model, attempts, output_chars: (.outputs[0].text | length)}'
{
"ok": true,
"transport": "local",
"provider": "deepseek",
"model": "deepseek-v4-flash",
"attempts": [],
"output_chars": 3528
}
OpenClaw version
2026.4.26
Operating system
Ubuntu 24.04
Install method
pnpm dev (source checkout from github.com/openclaw/openclaw; pnpm install && pnpm build; invoked via pnpm openclaw)
Model
anthropic/claude-opus-4-7 / deepseek/deepseek-v4-flash
Dedupe checked against the openclaw issue corpus on 2026-04-28: no existing open or closed issue matches. Closest text match is #34935 (closed), "[Bug]: safeguard compaction makes LLM API call before checking for real messages — 48 unnecessary calls/day on idle sessions": a different code path (compaction, not CLI infer) but conceptually related in that both concern avoiding billed calls on empty input. Could be cross-linked but is not a duplicate.