
feat(openai): forward text verbosity#47106

Merged
vincentkoc merged 3 commits into openclaw:main from merc1305:fix/openai-text-verbosity
Mar 30, 2026

Conversation

@merc1305
Contributor

Summary

  • forward OpenAI Responses text.verbosity from model params
  • support both textVerbosity and text_verbosity alias styles
  • add payload + websocket coverage for text verbosity forwarding

Why

OpenClaw already forwards several OpenAI-native controls such as max_output_tokens, reasoning.effort, and service_tier, but it was not forwarding text.verbosity.

That setting is useful when users want deeper internal reasoning with shorter external replies — i.e. "think more, say less" — without relying only on prompt wording.

What changed

  • added OpenAI text verbosity normalization (low|medium|high)
  • injected text.verbosity into Responses payloads through the existing OpenAI wrapper path
  • preserved existing payload.text fields if already present
  • supported null override suppression in extra param precedence flow
  • forwarded text verbosity in the OpenAI websocket path for parity
  • added targeted tests
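
The normalization-and-injection flow above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the function names `normalizeTextVerbosity` and `applyTextVerbosity` are hypothetical stand-ins for the wrapper logic, assuming behavior as described (accept both alias styles, canonicalize to `low|medium|high`, preserve any existing `payload.text` fields):

```typescript
type TextVerbosity = "low" | "medium" | "high";

// Canonicalize a raw param value; invalid values yield undefined
// (the real wrapper also logs a warning in that case).
function normalizeTextVerbosity(value: unknown): TextVerbosity | undefined {
  if (typeof value !== "string") return undefined;
  const v = value.trim().toLowerCase();
  return v === "low" || v === "medium" || v === "high" ? v : undefined;
}

// Inject text.verbosity into a Responses payload, keeping any
// existing payload.text fields intact.
function applyTextVerbosity(
  payload: Record<string, unknown>,
  params: Record<string, unknown>,
): Record<string, unknown> {
  const raw = params.textVerbosity ?? params.text_verbosity;
  const verbosity = normalizeTextVerbosity(raw);
  if (verbosity === undefined) return payload;
  const existingText =
    payload.text && typeof payload.text === "object"
      ? (payload.text as Record<string, unknown>)
      : {};
  return { ...payload, text: { ...existingText, verbosity } };
}
```

For example, a payload that already carries `text: { format: "plain" }` keeps that field and gains `verbosity`, while an invalid value such as `"loud"` leaves the payload untouched.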

Closes #47105

@openclaw-barnacle openclaw-barnacle bot added the agents (Agent runtime and tooling) and size: M labels on Mar 15, 2026
@greptile-apps
Contributor

greptile-apps bot commented Mar 15, 2026

Greptile Summary

This PR adds text.verbosity forwarding to the OpenAI Responses API path, following the same pattern already used for service_tier and reasoning.effort. Both alias styles (textVerbosity / text_verbosity) are supported, null-override suppression is implemented, and existing text payload fields are preserved during injection. Test coverage is thorough across the payload mutation, alias canonicalization, invalid-value warning, and null-suppression cases.

Key observations:

  • The private normalizeOpenAITextVerbosity function is duplicated between openai-ws-stream.ts and openai-stream-wrappers.ts with identical logic. More importantly, the WebSocket send path uses the local copy, which silently discards invalid verbosity values without logging a warning. The HTTP/payload wrapper path correctly uses resolveOpenAITextVerbosity and emits a log.warn for the same scenario. Consolidating to a single shared utility (or importing resolveOpenAITextVerbosity in openai-ws-stream.ts) would close the diagnostic gap and remove the duplication.
  • The createOpenAITextVerbosityWrapper guard (model.api !== "openai-responses") is intentionally broader than createOpenAIServiceTierWrapper (which also checks model.provider === "openai" and the base URL). This is consistent with how createOpenAIResponsesContextManagementWrapper handles the same scope, so it appears deliberate.
  • The resolveAliasedParamValue + null-suppression flow in applyExtraParamsToAgent correctly handles all four cases: no value, valid value, invalid value, and null override.
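
The four-case flow noted above can be sketched as below. This is a hedged reconstruction, not the repository's actual implementation: the signatures of `resolveAliasedParamValue` and `resolveVerbosity` are assumptions based on the behavior the review describes (camelCase alias wins over snake_case, `null` explicitly suppresses forwarding, invalid strings are dropped, absent keys resolve to nothing):

```typescript
// Resolve a param that may appear under either alias style.
function resolveAliasedParamValue(
  params: Record<string, unknown>,
  camel: string,
  snake: string,
): unknown {
  if (camel in params) return params[camel];
  if (snake in params) return params[snake];
  return undefined;
}

// Four cases: absent -> undefined; null -> suppressed (undefined);
// invalid string -> dropped (undefined, with a warning in the real code);
// valid string -> canonical lowercase value.
function resolveVerbosity(params: Record<string, unknown>): string | undefined {
  const raw = resolveAliasedParamValue(params, "textVerbosity", "text_verbosity");
  if (raw === null) return undefined; // explicit null override: suppress
  if (typeof raw !== "string") return undefined;
  const v = raw.trim().toLowerCase();
  return v === "low" || v === "medium" || v === "high" ? v : undefined;
}
```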

Confidence Score: 4/5

  • Safe to merge with one minor inconsistency to address: invalid verbosity values are silently dropped on the WebSocket path where the HTTP path would log a warning.
  • The implementation is well-structured, closely mirrors existing patterns (service_tier, reasoning.effort), and has good test coverage across edge cases. The only notable gap is the missing log.warn for invalid textVerbosity values when using the WebSocket transport, which won't cause a runtime error but can make misconfigured settings harder to diagnose. No data-loss or correctness bugs were found.
  • src/agents/openai-ws-stream.ts — the duplicated normalizeOpenAITextVerbosity and the missing warning for invalid values on the WS path.
Path: src/agents/openai-ws-stream.ts
Line: 604-613

Comment:
**No warning logged for invalid verbosity in WS path**

The WebSocket path silently discards invalid `textVerbosity` values (e.g. `"loud"`) with no feedback to the user. The HTTP path in `openai-stream-wrappers.ts` uses `resolveOpenAITextVerbosity`, which emits a `log.warn` when normalization fails. This inconsistency means a misconfigured verbosity will surface a warning on SSE transport but be silently ignored on WebSocket transport, making it harder to debug.

Consider calling `resolveOpenAITextVerbosity` here (after importing it from `openai-stream-wrappers.ts`) instead of the local `normalizeOpenAITextVerbosity`, which would give users the same warning regardless of transport:

```ts
// in imports at top of file
import { resolveOpenAITextVerbosity } from "./pi-embedded-runner/openai-stream-wrappers.js";

// then replace lines 604–613 with:
      const textVerbosity = resolveOpenAITextVerbosity(
        streamOpts as Record<string, unknown> | undefined,
      );
      if (textVerbosity !== undefined) {
        const existingText =
          extraParams.text && typeof extraParams.text === "object"
            ? (extraParams.text as Record<string, unknown>)
            : {};
        extraParams.text = { ...existingText, verbosity: textVerbosity };
      }
```

As a bonus, this would eliminate the duplicate `normalizeOpenAITextVerbosity` definition in this file.


Last reviewed commit: 7d4aa5c

@merc1305
Contributor Author

Pushed a follow-up commit to close the user-visible/E2E gap.

New commit on this PR branch: b04cbcc215

Delta in this commit:

  1. createOpenAITextVerbosityWrapper() now also applies to openai-codex-responses.
  2. Codex path now force-applies configured verbosity (so explicit model param overrides Codex default medium).
  3. Added tests:
    • Codex responses receive configured verbosity.
    • OpenAI responses preserve caller-specified payload.text.verbosity.
  4. /status now surfaces configured text verbosity as Text: <level> for the active model.
  5. Added status test for text verbosity visibility.

Re-ran targeted suite:

  • src/agents/pi-embedded-runner-extraparams.test.ts
  • src/agents/openai-ws-stream.test.ts
  • src/auto-reply/status.test.ts

Result: 145/145 passing.

@merc1305
Contributor Author

Addressed the remaining Greptile review note in 1754a4c306.

What changed:

  • removed the duplicate WS-local normalizeOpenAITextVerbosity() helper
  • reused shared resolveOpenAITextVerbosity() in openai-ws-stream.ts
  • invalid WS textVerbosity values now emit the same log.warn(...) as the HTTP path
  • added WS test coverage for the invalid-value warning case

Re-ran targeted suite:

  • src/agents/pi-embedded-runner-extraparams.test.ts
  • src/agents/openai-ws-stream.test.ts
  • src/auto-reply/status.test.ts

Result: 146/146 passing.

@merc1305 merc1305 force-pushed the fix/openai-text-verbosity branch from 1754a4c to e4c94c9 on March 15, 2026 at 16:45
@merc1305
Contributor Author

Rebased and refreshed on top of current upstream/main (force-pushed head: e4c94c910e).

What this preserves from the original PR intent:

  • forwards textVerbosity / text_verbosity for Responses payloads
  • applies configured text verbosity for both OpenAI Responses and Codex Responses
    • Codex path intentionally overrides existing payload.text.verbosity
    • OpenAI Responses path preserves caller-provided payload.text.verbosity
  • WS transport now uses shared resolveOpenAITextVerbosity()
    • invalid WS values emit the same warning as HTTP path
  • status output shows configured text verbosity as Text: <level>
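
The asymmetric override semantics listed above (Codex force-applies the configured value; OpenAI Responses preserves a caller-provided one) can be illustrated with a small sketch. The function name `mergeVerbosity` and its signature are hypothetical, used only to show the merge direction:

```typescript
// Merge a configured verbosity into an existing payload.text object.
// Codex Responses: configured value wins (overrides Codex's default "medium").
// OpenAI Responses: a caller-provided payload value wins.
function mergeVerbosity(
  api: "openai-responses" | "openai-codex-responses",
  payloadText: Record<string, unknown>,
  configured: string,
): Record<string, unknown> {
  if (api === "openai-codex-responses") {
    // Spread first, then set: configured overrides any existing verbosity.
    return { ...payloadText, verbosity: configured };
  }
  // Set first, then spread: an existing caller verbosity overrides configured.
  return { verbosity: configured, ...payloadText };
}
```

The spread order alone encodes the precedence, which keeps the two paths symmetric in shape while differing in who wins.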

Targeted suite re-run on refreshed head:

  • src/agents/pi-embedded-runner-extraparams.test.ts
  • src/agents/openai-ws-stream.test.ts
  • src/auto-reply/status.test.ts

Result: 167/167 passing.

@vincentkoc vincentkoc force-pushed the fix/openai-text-verbosity branch from e4c94c9 to 67d8d02 on March 30, 2026 at 02:52
@vincentkoc vincentkoc self-assigned this Mar 30, 2026
Member

@vincentkoc vincentkoc left a comment


Reviewed current head 67d8d02.

No findings on the rebased branch.

Verified locally:

  • pnpm test -- src/agents/pi-embedded-runner-extraparams.test.ts src/agents/openai-ws-stream.test.ts src/auto-reply/status.test.ts
  • pnpm check
  • pnpm build

Notable follow-up from maintainer pass: /status now derives text verbosity from the same config precedence path as request application, so per-agent params no longer drift from runtime behavior.

@vincentkoc vincentkoc merged commit a6bc51f into openclaw:main Mar 30, 2026
8 checks passed
alexjiang1 pushed a commit to alexjiang1/openclaw that referenced this pull request Mar 31, 2026
* feat(openai): forward text verbosity across responses transports

* fix(openai): remove stale verbosity rebase artifact

* chore(changelog): add openai text verbosity entry

---------

Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>
pgondhi987 pushed a commit to pgondhi987/openclaw that referenced this pull request Mar 31, 2026
* feat(openai): forward text verbosity across responses transports

* fix(openai): remove stale verbosity rebase artifact

* chore(changelog): add openai text verbosity entry

---------

Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>

Labels

agents Agent runtime and tooling size: L


Development

Successfully merging this pull request may close these issues.

Forward OpenAI Responses text.verbosity from model params
