fix(mistral): handle content blocks array in reasoning stream #67203
draix wants to merge 2 commits into openclaw:main from
Conversation
…ng.file is unset at init (openclaw#67168)

Root cause: resolveSettings() is called early during module load, before the gateway has finished reading openclaw.json. At that point cfg?.file is undefined, so it falls back to the default rolling path in /tmp/openclaw/. That result was cached permanently in loggingState.cachedSettings. Subsequent calls (including all writes during the gateway's lifetime) hit the cache and never saw the user-configured logging.file value.

Fix: introduce a `provisional` flag on ResolvedSettings. Settings are marked provisional when the file path came from the default (no user config was available at resolve time). isFileLogLevelEnabled() — the hot path that checks the cache — now skips provisional entries and re-resolves, allowing the configured path to be picked up on the next call once the gateway has loaded its config.

getLogger() was already correct: it calls resolveSettings() unconditionally and uses settingsChanged() to rebuild the logger when the file path changes.

Adds 4 regression tests in logger.provisional-cache.test.ts.
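The provisional-cache pattern described in the commit message can be sketched as follows. This is a simplified standalone illustration, not the PR's actual code: the names `resolveSettings`, `isFileLogLevelEnabled`, `loggingState`, and the `provisional` flag come from the commit message, while the config-loading mechanics are assumptions.

```typescript
// Minimal sketch of the provisional-cache fix: a default path cached
// before config load is marked provisional and re-resolved later.

interface ResolvedSettings {
  file: string;
  provisional: boolean; // true when the path fell back to the default
}

// Stand-in for the gateway's config; unset until openclaw.json is read.
let userConfiguredFile: string | undefined;
const loggingState: { cachedSettings?: ResolvedSettings } = {};

function resolveSettings(): ResolvedSettings {
  if (userConfiguredFile) {
    return { file: userConfiguredFile, provisional: false };
  }
  // No user config yet: fall back to the default, but mark the result
  // provisional so the cache is re-resolved on a later call.
  return { file: "/tmp/openclaw/rolling.log", provisional: true };
}

function isFileLogLevelEnabled(): string {
  // The fix: skip provisional cache entries and re-resolve.
  if (!loggingState.cachedSettings || loggingState.cachedSettings.provisional) {
    loggingState.cachedSettings = resolveSettings();
  }
  return loggingState.cachedSettings.file;
}

// Early call during module load caches the provisional default.
const early = isFileLogLevelEnabled();
// Gateway finishes reading its config...
userConfiguredFile = "/var/log/openclaw/gateway.log";
// The next call notices the provisional flag and picks up the real path.
const later = isFileLogLevelEnabled();
```

Without the `provisional` flag, the first branch would never re-enter after the initial call, which is exactly the permanent-cache bug the commit describes.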
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 008bc64ddd
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```ts
if (contentBlock.type === "thinking" && typeof contentBlock.thinking === "string") {
  appendThinkingDelta({
    signature: "mistral_thinking_block",
    text: contentBlock.thinking,
  });
```
Flush buffered reasoning before Mistral thinking blocks
When pendingThinkingDelta is buffered from an earlier chunk (the reasoning_details + tool_calls path), this new Mistral array branch appends thinking blocks immediately without flushing that buffer first. If the next chunk starts with a thinking block (no text yet), older buffered reasoning is appended only later (on text or at stream end), so reasoning content is emitted out of order and can be merged under the wrong thinking signature. This is observable in processOpenAICompletionsStream whenever tool-call buffering is followed by Mistral delta.content thinking blocks.
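The ordering hazard Codex describes can be illustrated with a small sketch. The names `pendingThinkingDelta` and `appendThinkingDelta` mirror the review comment; the event shapes and the `flushPendingThinking` helper are simplified assumptions, not the project's actual API.

```typescript
// Sketch of the flush-before-append ordering: reasoning buffered from an
// earlier chunk must be emitted before new Mistral thinking blocks, or
// the deltas interleave out of order under the wrong signature.

type ThinkingDelta = { signature: string; text: string };

const emitted: ThinkingDelta[] = [];
let pendingThinkingDelta: ThinkingDelta | undefined;

function appendThinkingDelta(delta: ThinkingDelta): void {
  emitted.push(delta);
}

function flushPendingThinking(): void {
  if (pendingThinkingDelta) {
    appendThinkingDelta(pendingThinkingDelta);
    pendingThinkingDelta = undefined;
  }
}

// Chunk 1: reasoning arrives alongside tool calls, so it is buffered.
pendingThinkingDelta = { signature: "reasoning_details", text: "step 1" };

// Chunk 2: a Mistral thinking block arrives. Flushing first preserves order;
// skipping the flush would emit "step 2" before the buffered "step 1".
function handleMistralThinkingBlock(text: string): void {
  flushPendingThinking();
  appendThinkingDelta({ signature: "mistral_thinking_block", text });
}

handleMistralThinkingBlock("step 2");
```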
Greptile Summary

This PR fixes a real stream-parsing crash for Mistral reasoning models by detecting when …

Confidence Score: 4/5

Production fix is correct; test scaffolding has a type-shape mismatch that will fail the type gate before this can land. The core logic change is sound and well-tested at runtime (58 tests pass), but the three new test cases in src/agents/openai-transport-stream.test.ts need corrected …
```ts
const output = {
  role: "assistant" as const,
  content: [],
  api: model.api,
  provider: model.provider,
  model: model.id,
  usage: {
    input: 0,
    output: 0,
    cacheRead: 0,
    cacheWrite: 0,
    total: 0,
  },
};
```
**Incorrect `output` shape in new tests — will fail type-checking**
All three new test cases initialize `output` with `usage: { total: 0 }` and omit `stopReason`, `timestamp`, `totalTokens`, and `cost`. `processOpenAICompletionsStream` expects `output: MutableAssistantOutput`, which requires all of those fields. Existing tests (e.g. the Qwen3 case around line 1745) show the correct pattern. Without `stopReason` and `timestamp`, `pnpm tsgo` / `pnpm build` will fail.
The same issue occurs in the other two new test cases (lines ~2150 and ~2233).
```suggestion
const output = {
  role: "assistant" as const,
  content: [],
  api: model.api,
  provider: model.provider,
  model: model.id,
  usage: {
    input: 0,
    output: 0,
    cacheRead: 0,
    cacheWrite: 0,
    totalTokens: 0,
    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
  },
  stopReason: "stop",
  timestamp: Date.now(),
};
```
…ct in completions stream (openclaw#67192)

Mistral's OpenAI-compatible streaming endpoint returns reasoning_content as an array of block objects rather than a flat string:

`{delta: {reasoning_content: [{type: "thinking", content: "..."}]}}`

The existing getCompletionsReasoningDelta() only checked typeof value === "string", so the array/object fell through unhandled. When the value was later used in a template literal or concatenation, JavaScript coerced it to "[object Object]", corrupting the stream and crashing the channel with repeated [object Object] output.

Fix: extend getCompletionsReasoningDelta() to handle three formats for the reasoning_content / reasoning / reasoning_text fields:

1. string (existing, unchanged)
2. array of block objects: `[{type:"thinking",content:"..."} | {type:"text",text:"..."} | string]`
3. single plain object: `{type:"thinking",content:"..."}` or `{type:"text",text:"..."}`

Adds 4 tests in openai-transport-stream.test.ts covering array blocks, plain object, string backward compat, and empty array (no crash).
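The three-format normalization the commit message describes can be sketched like this. This is a hedged, standalone illustration: the block shapes come from the PR, but `extractReasoningText` is a hypothetical helper name, not the project's actual `getCompletionsReasoningDelta` implementation.

```typescript
// Sketch of normalizing reasoning deltas that may arrive as a string,
// an array of block objects, or a single block object. Anything else
// returns undefined rather than coercing to "[object Object]".

type ReasoningBlock =
  | { type: "thinking"; content?: string }
  | { type: "text"; text?: string }
  | string;

function extractReasoningText(value: unknown): string | undefined {
  // 1. Flat string (existing behavior, unchanged).
  if (typeof value === "string") return value;

  const fromBlock = (block: ReasoningBlock): string => {
    if (typeof block === "string") return block;
    if (block.type === "thinking" && typeof block.content === "string") return block.content;
    if (block.type === "text" && typeof block.text === "string") return block.text;
    return "";
  };

  // 2. Array of block objects (Mistral's streaming shape).
  if (Array.isArray(value)) {
    const text = value.map((b) => fromBlock(b as ReasoningBlock)).join("");
    return text.length > 0 ? text : undefined;
  }

  // 3. Single plain block object.
  if (typeof value === "object" && value !== null && "type" in value) {
    const text = fromBlock(value as ReasoningBlock);
    return text.length > 0 ? text : undefined;
  }

  // Unknown shape: fall through untouched instead of stringifying it.
  return undefined;
}
```

An empty array yields `undefined` rather than an empty delta, matching the "empty array (no crash)" test the commit mentions.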
008bc64 to b697b89 · Compare
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b697b89dc4
```ts
if (!loggingState.cachedSettings || (loggingState.cachedSettings as ResolvedSettings).provisional) {
  loggingState.cachedSettings = settings;
```
Rebuild logger when replacing provisional cached settings
Updating loggingState.cachedSettings from isFileLogLevelEnabled without also rebuilding cachedLogger can leave those two caches out of sync. If early startup logs created a logger with the provisional default file, a later call here can swap cachedSettings to the configured file path, and then getLogger() will skip rebuilding because settingsChanged(cachedSettings, resolveSettings()) now compares two new settings while cachedLogger is still old. In the normal createSubsystemLogger path (isFileLogLevelEnabled runs before getChildLogger), this means logging.file can still be ignored after config load and logs continue writing to the stale default file.
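The cache desync Codex warns about can be sketched as follows. The names `cachedSettings`, `cachedLogger`, and `getLogger` follow the review comment; the stand-in logger object and the `replaceProvisionalSettings` helper are assumptions for illustration, and the invalidation shown is one possible remedy, not necessarily the one the PR will adopt.

```typescript
// Sketch: replacing provisional cachedSettings without invalidating
// cachedLogger leaves the logger writing to the stale default file.
// Dropping cachedLogger at the same time keeps the two caches in sync.

interface Settings { file: string; provisional: boolean }
interface FakeLogger { file: string } // stand-in for a real logger handle

const state: { cachedSettings?: Settings; cachedLogger?: FakeLogger } = {};

function replaceProvisionalSettings(next: Settings): void {
  if (!state.cachedSettings || state.cachedSettings.provisional) {
    state.cachedSettings = next;
    // One possible fix: drop the logger built from provisional settings
    // so the next getLogger() call rebuilds it on the new file path.
    state.cachedLogger = undefined;
  }
}

function getLogger(): FakeLogger {
  if (!state.cachedLogger) {
    state.cachedLogger = { file: state.cachedSettings?.file ?? "/tmp/default.log" };
  }
  return state.cachedLogger;
}

// Early startup: provisional settings produce a logger on the default file.
state.cachedSettings = { file: "/tmp/default.log", provisional: true };
const stale = getLogger();
// Config loads; settings are swapped and the old logger invalidated.
replaceProvisionalSettings({ file: "/var/log/app.log", provisional: false });
const fresh = getLogger();
```

Without the `state.cachedLogger = undefined` line, `fresh` would still point at the logger built for `/tmp/default.log`, which is exactly the stale-file behavior the review describes.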
Related work from PR tags group — Open PR duplicate: [Bug]: Mistral reasoning (/think) crashes channel with [object object] due to missing stream wr…
Thanks for the context here. I swept through the related work, and this is now duplicate or superseded. Close as superseded. The Mistral stream crash is real, but this branch is not the canonical fix because it misses structured … So I'm closing this here and keeping the remaining discussion on the canonical linked item.

Review details

Best possible solution: Close this source PR in favor of #75339 or a focused replacement that normalizes structured stream content in the OpenAI-compatible transport and excludes the logging-cache changes.

Do we have a high-confidence way to reproduce the issue? Yes. #67192 gives concrete Mistral …

Is this the best way to solve the issue? No. The PR direction is useful, but the helper-only parsing is incomplete and the unrelated logging change makes this a worse merge path than the focused normalizer in #75339.

Security review: cleared. The diff does not introduce CI, dependency, package-resolution, secret-handling, or downloaded-code changes; the logging concern is functional rather than supply-chain related.

What I checked:
Likely related people:
Codex review notes: model gpt-5.5, reasoning high; reviewed against 0c6c1cac7691.
Codex review: needs changes before merge.

What this changes: The PR extends OpenAI completions stream parsing for Mistral …

Required change before merge: The linked crash is valid, the fix boundary is narrow, and a repair worker can produce a focused replacement or rework that deliberately excludes the unrelated logging changes.

Review details

Best possible solution: Land a focused transport fix that keeps the Mistral stream handling in …

Acceptance criteria:
What I checked:
Likely related people:
Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against acae48b790fa.
Summary

Fixes #67192 — Mistral `/think` causes `[object Object]` crash in channel.

Root cause

Mistral's OpenAI-compatible streaming endpoint returns `reasoning_content` as an array of block objects rather than a flat string:

`{"delta": {"reasoning_content": [{"type": "thinking", "content": "Let me think..."}]}}`

`getCompletionsReasoningDelta()` only handled `typeof value === "string"`, so the array fell through unhandled → coerced to `"[object Object]"` in downstream template literals → channel crash.

Fix

Extended `getCompletionsReasoningDelta()` to handle three formats:

- string: `"Let me think..."`
- array of blocks: `[{type:"thinking", content:"..."}, ...]`
- plain object: `{type:"thinking", content:"..."}`

Tests added (4)

Fixes #67192