feat(openai): forward text verbosity #47106
Conversation
Greptile Summary
Confidence Score: 4/5
This is a comment left during a code review.
Path: src/agents/openai-ws-stream.ts
Line: 604-613
Comment:
**No warning logged for invalid verbosity in WS path**
The WebSocket path silently discards invalid `textVerbosity` values (e.g. `"loud"`) with no feedback to the user. The HTTP path in `openai-stream-wrappers.ts` uses `resolveOpenAITextVerbosity`, which emits a `log.warn` when normalization fails. This inconsistency means a misconfigured verbosity will surface a warning on SSE transport but be silently ignored on WebSocket transport, making it harder to debug.
Consider calling `resolveOpenAITextVerbosity` here (after importing it from `openai-stream-wrappers.ts`) instead of the local `normalizeOpenAITextVerbosity`, which would give users the same warning regardless of transport:
```ts
// in imports at top of file
import { resolveOpenAITextVerbosity } from "./pi-embedded-runner/openai-stream-wrappers.js";

// then replace lines 604–613 with:
const textVerbosity = resolveOpenAITextVerbosity(
  streamOpts as Record<string, unknown> | undefined,
);
if (textVerbosity !== undefined) {
  const existingText =
    extraParams.text && typeof extraParams.text === "object"
      ? (extraParams.text as Record<string, unknown>)
      : {};
  extraParams.text = { ...existingText, verbosity: textVerbosity };
}
```
As a bonus, this would eliminate the duplicate `normalizeOpenAITextVerbosity` definition in this file.
How can I resolve this? If you propose a fix, please make it concise.

Last reviewed commit: 7d4aa5c
Follow-up pushed to finish the user-visible/E2E gap. New commit on this PR branch:

Delta in this commit:
Re-ran targeted suite:
Result: 145/145 passing.
Addressed the remaining Greptile review note.

What changed:
Re-ran targeted suite:
Result: 146/146 passing.
Force-pushed from 1754a4c to e4c94c9.
Rebased/refresh pass is pushed on top of the current base.

What this preserves from the original PR intent:
Targeted suite re-run on refreshed head:
Result: 167/167 passing.
Force-pushed from e4c94c9 to 67d8d02.
vincentkoc left a comment:
Reviewed current head 67d8d02.
No findings on the rebased branch.
Verified locally:
- pnpm test -- src/agents/pi-embedded-runner-extraparams.test.ts src/agents/openai-ws-stream.test.ts src/auto-reply/status.test.ts
- pnpm check
- pnpm build
Notable follow-up from maintainer pass: /status now derives text verbosity from the same config precedence path as request application, so per-agent params no longer drift from runtime behavior.
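The fix described above keeps `/status` and request application on one resolution path. A minimal sketch of such a shared precedence chain (function and parameter names are hypothetical; the actual OpenClaw helpers may differ):

```typescript
// Hypothetical sketch: resolve text verbosity once, with per-agent params
// taking precedence over the global default, so /status display and the
// outgoing request can never drift apart.
type Verbosity = "low" | "medium" | "high";

function resolveVerbosity(
  perAgent?: string,
  globalDefault?: string,
): Verbosity | undefined {
  // Per-agent value wins; invalid values (e.g. "loud") fall through
  // to the next candidate instead of being silently applied.
  for (const candidate of [perAgent, globalDefault]) {
    if (candidate === "low" || candidate === "medium" || candidate === "high") {
      return candidate;
    }
  }
  return undefined;
}
```

Both the status renderer and the request builder would then call the same function, rather than each re-deriving the value.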
* feat(openai): forward text verbosity across responses transports
* fix(openai): remove stale verbosity rebase artifact
* chore(changelog): add openai text verbosity entry

---------

Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: Vincent Koc <[email protected]>
Summary

- `text.verbosity` is forwarded from model params
- Both `textVerbosity` and `text_verbosity` alias styles are accepted

Why
OpenClaw already forwards several OpenAI-native controls such as `max_output_tokens`, `reasoning.effort`, and `service_tier`, but it was not forwarding `text.verbosity`. That setting is useful when users want deeper internal reasoning with shorter external replies — i.e. "think more, say less" — without relying only on prompt wording.
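The "think more, say less" combination can be sketched as a Responses payload; the model name and chosen values here are purely illustrative:

```typescript
// Illustrative Responses API payload: high reasoning effort paired with
// low output verbosity, instead of steering length via prompt wording.
const payload = {
  model: "gpt-5",
  reasoning: { effort: "high" },
  text: { verbosity: "low" },
  input: "Summarize the tradeoffs of optimistic locking.",
};
```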
What changed
- Accepted values are normalized (`low|medium|high`)
- `text.verbosity` is forwarded into Responses payloads through the existing OpenAI wrapper path
- The value is merged with existing `payload.text` fields if already present

Closes #47105
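The normalize-then-merge behavior described in the changes above can be sketched as follows (helper names are hypothetical; the PR's actual implementation may differ):

```typescript
// Hypothetical sketch of normalization plus merge: accept both alias
// styles, reject anything outside low|medium|high, and preserve any
// existing payload.text fields rather than clobbering them.
type Verbosity = "low" | "medium" | "high";

function normalizeTextVerbosity(raw: unknown): Verbosity | undefined {
  return raw === "low" || raw === "medium" || raw === "high" ? raw : undefined;
}

function applyTextVerbosity(
  payload: Record<string, unknown>,
  params: Record<string, unknown>,
): void {
  // Support both camelCase and snake_case alias styles.
  const verbosity = normalizeTextVerbosity(
    params.textVerbosity ?? params.text_verbosity,
  );
  if (verbosity === undefined) return;
  // Merge with any pre-existing text settings (e.g. format).
  const existing =
    payload.text && typeof payload.text === "object"
      ? (payload.text as Record<string, unknown>)
      : {};
  payload.text = { ...existing, verbosity };
}
```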