fix(openai-transport): handle Mistral reasoning_content as non-string delta content #67292
rarest wants to merge 2 commits into openclaw:main
Conversation
Mistral's reasoning API returns content blocks as JSON objects (e.g. `{type: "thinking", thinking: "..."}`) instead of plain strings. When `choice.delta.content` is a truthy object, the existing `if (choice.delta.content)` check passes, then `currentBlock.text += choice.delta.content` coerces the object to `'[object Object]'`, corrupting the chat output.

Fix: use a `typeof` guard so only actual string content is treated as text. Non-string content (objects, arrays) falls through to the `getCompletionsReasoningDelta()` handler, which already parses `reasoning_content` from the delta structure.
Fixes openclaw#67192
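The failure mode and the guard can be sketched as follows; `ChoiceDelta`, `appendDeltaTruthy`, and `appendDeltaGuarded` are simplified stand-ins for illustration, not the actual code in `openai-transport-stream.ts`:

```typescript
// Simplified stand-in for the delta shape; the real types live in the transport.
type ChoiceDelta = {
  content?: string | { type: string; thinking?: string } | null;
  reasoning_content?: string;
};

// Buggy shape: a truthy check lets objects through, and += coerces them.
function appendDeltaTruthy(block: { text: string }, delta: ChoiceDelta): void {
  if (delta.content) {
    block.text += delta.content; // object becomes "[object Object]"
  }
}

// Fixed shape: only genuine strings are appended; objects fall through
// (in the real code, to the getCompletionsReasoningDelta() handler).
function appendDeltaGuarded(
  block: { text: string },
  delta: ChoiceDelta,
): string | undefined {
  if (typeof delta.content === "string") {
    block.text += delta.content;
    return undefined;
  }
  return delta.reasoning_content; // hypothetical fallthrough result
}

const thinking: ChoiceDelta = {
  content: { type: "thinking", thinking: "..." },
  reasoning_content: "step 1",
};

const buggy = { text: "NO_REPLY" };
appendDeltaTruthy(buggy, thinking);
console.log(buggy.text); // "NO_REPLY[object Object]"

const fixed = { text: "NO_REPLY" };
const reasoning = appendDeltaGuarded(fixed, thinking);
console.log(fixed.text, reasoning); // "NO_REPLY" "step 1"
```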
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e0fd928945
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```diff
     continue;
   }
-  if (choice.delta.content) {
+  if (typeof choice.delta.content === "string") {
```
Avoid short-circuiting tool call parsing on empty content delta
This condition now treats `content: ""` as a text delta, and the subsequent `continue` prevents the same chunk's `tool_calls` from being processed. In OpenAI-compatible streams that emit empty-string content alongside initial tool-call tokens, the tool call is dropped entirely, which turns a tool-use turn into a plain stop/text turn and can break tool execution flow.
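A sketch of the hazard under assumed chunk shapes (these names are illustrative, not the real transport code): a chunk carrying `content: ""` together with the first `tool_calls` tokens is claimed by a bare `typeof` check, while a non-empty-string variant lets it reach the tool-call path:

```typescript
// Illustrative delta shape; not the actual transport types.
type Delta = {
  content?: string | object | null;
  tool_calls?: { index: number; function?: { name?: string; arguments?: string } }[];
};

function classifyChunk(delta: Delta): "text" | "tool_calls" | "other" {
  // Strict-guard variant from the PR: "" is a string, so this branch is
  // taken and a `continue` in the real loop would skip tool_calls below.
  if (typeof delta.content === "string") return "text";
  if (delta.tool_calls?.length) return "tool_calls";
  return "other";
}

function classifyChunkSafe(delta: Delta): "text" | "tool_calls" | "other" {
  // Possible tightening: require non-empty string content before
  // short-circuiting, so empty-content chunks still reach tool_calls.
  if (typeof delta.content === "string" && delta.content.length > 0) return "text";
  if (delta.tool_calls?.length) return "tool_calls";
  return "other";
}

const chunk: Delta = {
  content: "",
  tool_calls: [{ index: 0, function: { name: "search" } }],
};
console.log(classifyChunk(chunk));     // "text" (tool call dropped)
console.log(classifyChunkSafe(chunk)); // "tool_calls"
```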
Greptile Summary

This is a one-line targeted fix replacing the truthy check.

Confidence Score: 5/5

Safe to merge — minimal one-line fix with no logic regressions; the only finding is a P2 suggestion to add a regression test. The change is surgical and correct: it tightens a type guard to prevent object-to-string coercion, and the existing fallthrough path already handles the Mistral reasoning delta format. All findings are P2 (missing test). No files require special attention beyond the optional test addition in

Prompt To Fix All With AI

This is a comment left during a code review.
Path: src/agents/openai-transport-stream.ts
Line: 1121
Comment:
**Missing regression test for Mistral object-content chunks**
No test covers the fixed case: a chunk where `choice.delta.content` is a plain object (e.g. `{type: "thinking", thinking: "..."}`) alongside a `reasoning_content` string field. Without it there's no automated guard against a future refactor re-introducing the coercion. A small addition to `openai-transport-stream.test.ts` covering a Mistral-style stream where one chunk carries an object `content` + `reasoning_content` delta and a later chunk carries a string `content` would fully close the loop.
How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "fix(openai-transport): guard delta.conte..."
```diff
     continue;
   }
-  if (choice.delta.content) {
+  if (typeof choice.delta.content === "string") {
```
Missing regression test for Mistral object-content chunks
No test covers the fixed case: a chunk where `choice.delta.content` is a plain object (e.g. `{type: "thinking", thinking: "..."}`) alongside a `reasoning_content` string field. Without it there's no automated guard against a future refactor re-introducing the coercion. A small addition to `openai-transport-stream.test.ts` covering a Mistral-style stream, where one chunk carries an object `content` + `reasoning_content` delta and a later chunk carries a string `content`, would fully close the loop.
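One possible shape for such a regression test, assuming a simplified `collectText` helper rather than the actual fixtures in `openai-transport-stream.test.ts`:

```typescript
// Assumed chunk shape for the sketch; the real test file's helpers differ.
type StreamChunk = {
  choices: { delta: { content?: unknown; reasoning_content?: string } }[];
};

// Minimal stand-in for the text-accumulation path under test: only
// string content is appended, so object content can never be coerced.
function collectText(chunks: StreamChunk[]): string {
  let text = "";
  for (const chunk of chunks) {
    const delta = chunk.choices[0]?.delta;
    if (typeof delta?.content === "string") text += delta.content;
  }
  return text;
}

const mistralStream: StreamChunk[] = [
  // Mistral-style chunk: object content alongside a reasoning_content string.
  {
    choices: [
      {
        delta: {
          content: { type: "thinking", thinking: "hmm" },
          reasoning_content: "hmm",
        },
      },
    ],
  },
  // Later chunk with plain string content.
  { choices: [{ delta: { content: "Final answer" } }] },
];

const text = collectText(mistralStream);
console.log(text);                             // "Final answer"
console.log(text.includes("[object Object]")); // false
```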
The `logging.file` config key is classified as `kind: "none"` in `BASE_RELOAD_RULES_TAIL`, meaning config-file changes for it take the noop hot-reload path in `applyHotReload()`. However, the runtime config snapshot (the `loadPinnedRuntimeConfig` cache) was NOT updated on this path, so `resolveSettings()` -> `readLoggingConfig()` -> `loadConfig()` kept returning the stale STARTUP snapshot, and logs continued writing to the pre-change path.

Fix: when `applyHotReload()` sees `noopPaths` containing a `logging.*` key, call `setRuntimeConfigSnapshot(nextConfig)` to refresh the in-process snapshot. The next `resolveSettings()` call will then read the updated `logging.file` from disk.

Fixes openclaw#67168
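The detection described above might look roughly like this; the types and the `setRuntimeConfigSnapshot` stub are stand-ins for the real gateway code:

```typescript
// Illustrative stand-ins; the real gateway's plan and snapshot types differ.
interface ReloadPlan {
  noopPaths: string[];
}

let snapshot: Record<string, unknown> = {
  logging: { file: "/tmp/openclaw/gateway.log" },
};

// Stub for the real setter: refreshes the pinned in-process config snapshot.
function setRuntimeConfigSnapshot(next: Record<string, unknown>): void {
  snapshot = next;
}

function applyHotReloadNoop(plan: ReloadPlan, nextConfig: Record<string, unknown>): void {
  // logging.* changes are kind:"none", so no subsystem restarts; refresh the
  // runtime snapshot so the next resolveSettings() sees the new logging.file.
  const loggingChanged = plan.noopPaths.some(
    (p) => p === "logging" || p.startsWith("logging."),
  );
  if (loggingChanged) setRuntimeConfigSnapshot(nextConfig);
}

applyHotReloadNoop(
  { noopPaths: ["logging.file"] },
  { logging: { file: "/var/log/openclaw/gateway.log" } },
);
console.log((snapshot.logging as { file: string }).file);
// "/var/log/openclaw/gateway.log"
```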
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c82f0d0dcc
```diff
+  // logging.file value instead of the stale startup snapshot.
+  const loggingChanged = plan.noopPaths.some((p) => p === "logging" || p.startsWith("logging."));
+  if (loggingChanged) {
+    setRuntimeConfigSnapshot(nextConfig);
```
Preserve source snapshot when updating runtime config
Calling `setRuntimeConfigSnapshot(nextConfig)` here drops the paired source snapshot, even though the same hot-reload path has already activated secrets with both runtime+source snapshots. After this line runs, `writeConfigFile` sees `hadBothSnapshots=false` and skips source projection, so later config writes derived from runtime state can persist resolved values directly (including SecretRef-backed secrets) instead of preserving source refs. This makes a logging-only reload capable of causing config corruption/secret leakage in subsequent writes.
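A minimal sketch of the concern, with hypothetical names: updating only the runtime snapshot discards its paired source snapshot, while a snapshot-preserving variant carries it forward so source refs survive later writes:

```typescript
// Hypothetical snapshot pair; the real gateway's config state differs.
interface Snapshots {
  runtime: Record<string, unknown>;
  source?: Record<string, unknown>; // keeps SecretRef placeholders, not resolved values
}

let current: Snapshots = {
  runtime: { apiKey: "resolved-secret" },
  source: { apiKey: { $secretRef: "env:API_KEY" } },
};

// Problematic: drops `source`, so later writes could persist resolved secrets.
function setRuntimeOnly(runtime: Record<string, unknown>): void {
  current = { runtime };
}

// Snapshot-preserving variant: carry the existing source snapshot forward.
function setRuntimePreservingSource(runtime: Record<string, unknown>): void {
  current = { runtime, source: current.source };
}

setRuntimePreservingSource({ apiKey: "resolved-secret", logLevel: "debug" });
console.log(current.source !== undefined); // true: source refs preserved
```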
Related work from PR tags group — Open PR duplicate: [Bug]: Mistral reasoning (/think) crashes channel with [object object] due to missing stream wr…
Closing this as duplicate/superseded after Codex automated review. This mixed-scope PR should close as superseded: current main already addresses the referenced logging.file bug through the logging config path, while the Mistral stream bug remains real but is better tracked by focused parser-side PRs, especially #68418 and the related #67203. The submitted branch also has unresolved functional and security/config-write review risks.

Best possible solution: close this mixed-scope PR. Keep the shipped logging.file implementation on current main, and route the remaining Mistral parser fix through the focused canonical work in #68418 or the related #67203. A separate logging hot-reload enhancement should only be considered with a source-snapshot-preserving design.

What I checked:

So I'm closing this here and keeping the remaining discussion on the canonical linked item.

Codex Review notes: model gpt-5.5, reasoning high; reviewed against e962381dbf1e.
Fix #67192 — Mistral reasoning (/think) crashes channel with [object Object]
Problem
When using `mistral/mistral-small-latest` with reasoning enabled (`/think high`), the channel crashes and outputs `NO_REPLY[object Object][object Object]...`, which permanently corrupts the session.

Root Cause

In `src/agents/openai-transport-stream.ts` line 1121, Mistral returns `choice.delta.content` as a JSON object, not a string. The truthy check passes, and `+=` coerces the object to `[object Object]`.

Fix

Add a `typeof` guard so only actual string content is treated as text. Non-string content falls through to `getCompletionsReasoningDelta()`, which already handles `reasoning_content`.

Fix #67168 — logging.file config is read but not applied
Problem
Setting `logging.file` in `openclaw.json` has no effect — logs always write to `/tmp/openclaw/` regardless of configuration.

Root Cause

In `src/gateway/config-reload-plan.ts`, `logging` is classified as `kind: "none"` in `BASE_RELOAD_RULES_TAIL`. When `logging.file` changes:

- `applySnapshot(nextConfig)` updates `currentConfig` in memory
- `buildGatewayReloadPlan()` puts `"logging.file"` in `noopPaths`
- `applyHotReload()` runs with empty `hotReasons` — no subsystem is updated
- `setRuntimeConfigSnapshot()` is never called
- `resolveSettings()` → `readLoggingConfig()` → `loadConfig()` → `loadPinnedRuntimeConfig()` returns the stale startup snapshot

Fix
In `applyHotReload()`, detect when `noopPaths` contains a `logging.*` key and call `setRuntimeConfigSnapshot(nextConfig)`.

Testing

- Mistral with `/think` → no `[object Object]` output
- Change `logging.file` while gateway runs → logs switch to the new path