
fix(mistral): handle content blocks array in reasoning stream #67203

Closed

draix wants to merge 2 commits into openclaw:main from draix:fix/67192-mistral-reasoning-stream-wrapper

Conversation


@draix draix commented Apr 15, 2026

Summary

Fixes #67192 — Mistral /think causes [object Object] crash in channel.

Root cause

Mistral's OpenAI-compatible streaming endpoint returns reasoning_content as an array of block objects rather than a flat string:

{"delta": {"reasoning_content": [{"type": "thinking", "content": "Let me think..."}]}}

getCompletionsReasoningDelta() only handled typeof value === "string", so the array fell through unhandled → coerced to "[object Object]" in downstream template literals → channel crash.
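The failure mode is easy to reproduce in isolation. This is a minimal sketch, not the transport code; `ReasoningBlock` and the surrounding control flow are illustrative:

```typescript
// Minimal repro of the coercion bug. ReasoningBlock is illustrative,
// not the real transport type.
type ReasoningBlock = { type: string; content: string };

// A Mistral-style delta where reasoning_content is an array of blocks.
const reasoningContent: ReasoningBlock[] | string = [
  { type: "thinking", content: "Let me think..." },
];

let delta: string;
if (typeof reasoningContent === "string") {
  delta = reasoningContent; // the only case the old helper handled
} else {
  // The array falls through and a template literal coerces it via
  // Array.prototype.toString, stringifying each element to "[object Object]".
  delta = `${reasoningContent}`;
}
```

Running this yields `delta === "[object Object]"`, which is exactly the corrupted text that reached the channel.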

Fix

Extended getCompletionsReasoningDelta() to handle three formats:

| Format | Example | Status |
| --- | --- | --- |
| string | `"Let me think..."` | ✅ existing |
| array of blocks | `[{type:"thinking", content:"..."}, ...]` | ✅ new |
| plain object | `{type:"thinking", content:"..."}` | ✅ new |
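A hedged sketch of how a helper could normalize the three formats above into a flat delta string. Names are illustrative; the real `getCompletionsReasoningDelta()` in src/agents/openai-transport-stream.ts may differ in shape and edge-case handling:

```typescript
// Sketch only: normalizes the three observed reasoning_content shapes
// to a flat string. Block is an assumed shape, not the real type.
type Block = { type?: string; content?: string; text?: string };

function blockText(b: Block): string | undefined {
  if (typeof b.content === "string") return b.content; // {type:"thinking", content}
  if (typeof b.text === "string") return b.text;       // {type:"text", text}
  return undefined;
}

function reasoningDeltaToText(value: unknown): string | undefined {
  if (typeof value === "string") return value;         // format 1: plain string
  if (Array.isArray(value)) {                          // format 2: array of blocks
    const parts = value
      .map((b) => (typeof b === "string" ? b : blockText(b as Block)))
      .filter((p): p is string => typeof p === "string");
    // Empty array yields undefined: no crash, no spurious thinking block.
    return parts.length > 0 ? parts.join("") : undefined;
  }
  if (value && typeof value === "object") {            // format 3: single block object
    return blockText(value as Block);
  }
  return undefined;
}
```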

Tests added (4)

  1. Array of block objects → extracts and concatenates content
  2. Plain object single block → extracts content
  3. Plain string backward compat → unchanged
  4. Empty array → no crash, no spurious thinking block

Fixes #67192

@openclaw-barnacle Bot added the `agents` (Agent runtime and tooling) and `size: M` labels on Apr 15, 2026
…ng.file is unset at init (openclaw#67168)

Root cause: resolveSettings() is called early during module load, before the
gateway has finished reading openclaw.json. At that point cfg?.file is undefined
so it falls back to the default rolling path in /tmp/openclaw/. That result was
cached permanently in loggingState.cachedSettings.

Subsequent calls (including all writes during the gateway's lifetime) hit the
cache and never saw the user-configured logging.file value.

Fix: introduce a `provisional` flag on ResolvedSettings. Settings are marked
provisional when the file path came from the default (no user config was available
at resolve time). isFileLogLevelEnabled() — the hot path that checks the cache —
now skips provisional entries and re-resolves, allowing the configured path to be
picked up on the next call once the gateway has loaded its config.

getLogger() was already correct: it calls resolveSettings() unconditionally and
uses settingsChanged() to rebuild the logger when the file path changes.

Adds 4 regression tests in logger.provisional-cache.test.ts.
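A minimal sketch of the provisional-cache idea, assuming the shapes described above; `resolveSettings()`, `isFileLogLevelEnabled()`, and the state layout here are simplified stand-ins for src/logging/logger.ts:

```typescript
// Sketch of the provisional flag from the commit message; names and
// bodies are illustrative, not the real logger implementation.
interface ResolvedSettings {
  file: string;
  provisional: boolean; // true when the path came from the default, not user config
}

let userConfigFile: string | undefined; // set once the gateway loads openclaw.json
let cachedSettings: ResolvedSettings | undefined;

function resolveSettings(): ResolvedSettings {
  if (userConfigFile !== undefined) {
    return { file: userConfigFile, provisional: false };
  }
  // No user config yet: fall back to the default rolling path, marked provisional.
  return { file: "/tmp/openclaw/rolling.log", provisional: true };
}

function isFileLogLevelEnabled(): string {
  // Skip provisional cache entries and re-resolve, so the configured
  // path is picked up on the first call after config load.
  if (!cachedSettings || cachedSettings.provisional) {
    cachedSettings = resolveSettings();
  }
  return cachedSettings.file;
}
```

The first call caches the /tmp default but marks it provisional; once the gateway sets the configured path, the next call re-resolves instead of serving the stale cache.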

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 008bc64ddd

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Comment thread: src/agents/openai-transport-stream.ts (Outdated)
Comment on lines +1129 to +1133

```ts
if (contentBlock.type === "thinking" && typeof contentBlock.thinking === "string") {
  appendThinkingDelta({
    signature: "mistral_thinking_block",
    text: contentBlock.thinking,
  });
```

P2: Flush buffered reasoning before Mistral thinking blocks

When pendingThinkingDelta is buffered from an earlier chunk (the reasoning_details + tool_calls path), this new Mistral array branch appends thinking blocks immediately without flushing that buffer first. If the next chunk starts with a thinking block (no text yet), older buffered reasoning is appended only later (on text or at stream end), so reasoning content is emitted out of order and can be merged under the wrong thinking signature. This is observable in processOpenAICompletionsStream whenever tool-call buffering is followed by Mistral delta.content thinking blocks.
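The ordering hazard can be sketched as follows. `pendingThinkingDelta` and `appendThinkingDelta` are names taken from the review comment; the bodies here are illustrative, not the real stream code:

```typescript
// Sketch: flush any buffered reasoning before starting a new Mistral
// thinking block, so deltas are emitted in arrival order.
const emitted: string[] = [];
let pendingThinkingDelta: string | undefined;

function appendThinkingDelta(d: { signature: string; text: string }): void {
  emitted.push(`${d.signature}:${d.text}`);
}

function handleMistralThinkingBlock(text: string): void {
  // Without this flush, buffered reasoning from an earlier chunk would be
  // appended later (on text or at stream end) and land out of order.
  if (pendingThinkingDelta !== undefined) {
    appendThinkingDelta({ signature: "buffered", text: pendingThinkingDelta });
    pendingThinkingDelta = undefined;
  }
  appendThinkingDelta({ signature: "mistral_thinking_block", text });
}
```

Flushing first preserves arrival order even when a chunk opens with a thinking block and carries no text.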



greptile-apps Bot commented Apr 15, 2026

Greptile Summary

This PR fixes a real stream-parsing crash for Mistral reasoning models by detecting when delta.content is an array of typed blocks and routing each block to the correct appendThinkingDelta / text_delta path. The production code change is clean and well-integrated with the existing helper functions.

  • The three new test cases use usage: { total: 0 } with missing stopReason and timestamp, diverging from every pre-existing test in the file and the MutableAssistantOutput type contract — this will likely fail pnpm tsgo / pnpm build.

Confidence Score: 4/5

Production fix is correct; test scaffolding has a type-shape mismatch that will fail the type gate before this can land.

The core logic change is sound and well-tested at runtime (58 tests pass), but the three new test output objects are missing required MutableAssistantOutput fields (stopReason, timestamp, totalTokens, cost) and use total at the wrong nesting level. This is a P1 because it will block pnpm tsgo / pnpm build. Fix the test scaffolding to match the pattern at line 1745 and the score should reach 5.

src/agents/openai-transport-stream.test.ts — all three new test cases need corrected output initialization


Comment on lines +2035 to +2048
```ts
const output = {
  role: "assistant" as const,
  content: [],
  api: model.api,
  provider: model.provider,
  model: model.id,
  usage: {
    input: 0,
    output: 0,
    cacheRead: 0,
    cacheWrite: 0,
    total: 0,
  },
};
```
P1 Incorrect output shape in new tests — will fail type-checking

All three new test cases initialize output with usage: { total: 0 } and omit stopReason, timestamp, totalTokens, and cost. processOpenAICompletionsStream expects output: MutableAssistantOutput, which requires all of those fields. Existing tests (e.g. the Qwen3 case around line 1745) show the correct pattern. Without stopReason and timestamp, pnpm tsgo / pnpm build will fail.

The same issue occurs in the other two new test cases (lines ~2150 and ~2233).

Suggested change

Before:

```ts
const output = {
  role: "assistant" as const,
  content: [],
  api: model.api,
  provider: model.provider,
  model: model.id,
  usage: {
    input: 0,
    output: 0,
    cacheRead: 0,
    cacheWrite: 0,
    total: 0,
  },
};
```

After:

```ts
const output = {
  role: "assistant" as const,
  content: [],
  api: model.api,
  provider: model.provider,
  model: model.id,
  usage: {
    input: 0,
    output: 0,
    cacheRead: 0,
    cacheWrite: 0,
    totalTokens: 0,
    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0, total: 0 },
  },
  stopReason: "stop",
  timestamp: Date.now(),
};
```

…ct in completions stream (openclaw#67192)

Mistral's OpenAI-compatible streaming endpoint returns reasoning_content as an
array of block objects rather than a flat string:

  {delta: {reasoning_content: [{type: "thinking", content: "..."}]}}

The existing getCompletionsReasoningDelta() only checked typeof value === "string",
so the array/object fell through unhandled. When the value was later used in a
template literal or concatenation, JavaScript coerced it to "[object Object]",
corrupting the stream and crashing the channel with repeated [object Object] output.

Fix: extend getCompletionsReasoningDelta() to handle three formats for the
reasoning_content / reasoning / reasoning_text fields:
  1. string (existing, unchanged)
  2. array of block objects: [{type:"thinking",content:"..."} | {type:"text",text:"..."} | string]
  3. single plain object: {type:"thinking",content:"..."} or {type:"text",text:"..."}

Adds 4 tests in openai-transport-stream.test.ts covering array blocks, plain object,
string backward compat, and empty array (no crash).
@draix draix force-pushed the fix/67192-mistral-reasoning-stream-wrapper branch from 008bc64 to b697b89 on April 15, 2026 at 13:39

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b697b89dc4


Comment thread: src/logging/logger.ts
Comment on lines +155 to 156

```ts
if (!loggingState.cachedSettings || (loggingState.cachedSettings as ResolvedSettings).provisional) {
  loggingState.cachedSettings = settings;
```

P1: Rebuild logger when replacing provisional cached settings

Updating loggingState.cachedSettings from isFileLogLevelEnabled without also rebuilding cachedLogger can leave those two caches out of sync. If early startup logs created a logger with the provisional default file, a later call here can swap cachedSettings to the configured file path, and then getLogger() will skip rebuilding because settingsChanged(cachedSettings, resolveSettings()) now compares two new settings while cachedLogger is still old. In the normal createSubsystemLogger path (isFileLogLevelEnabled runs before getChildLogger), this means logging.file can still be ignored after config load and logs continue writing to the stale default file.
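One way to avoid the drift Codex describes is to invalidate the logger cache whenever the settings cache is swapped to a different file path. This is a sketch under assumed cache shapes; `settingsCache` and `loggerCache` are illustrative names, not the real `loggingState` fields:

```typescript
// Sketch: keep the two caches in sync by dropping the cached logger
// whenever cached settings are replaced with a different file path.
interface CachedSettings {
  file: string;
  provisional: boolean;
}

let settingsCache: CachedSettings | undefined;
let loggerCache: { file: string } | undefined;

function replaceCachedSettings(next: CachedSettings): void {
  const fileChanged = settingsCache !== undefined && settingsCache.file !== next.file;
  settingsCache = next;
  if (fileChanged) {
    // Force the next getLogger()-style call to rebuild against the new path,
    // instead of comparing two already-new settings while serving an old logger.
    loggerCache = undefined;
  }
}
```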



prtags Bot commented Apr 23, 2026

Related work from PRtags group capable-bison-62uf

Title: Open PR duplicate: [Bug]: Mistral reasoning (/think) crashes channel with [object object] due to missing stream wr…

| Number | Title |
| --- | --- |
| #67203* | fix(mistral): handle content blocks array in reasoning stream |
| #67292 | fix(openai-transport): handle Mistral reasoning_content as non-string delta content |

* This PR


clawsweeper Bot commented Apr 27, 2026

Thanks for the context here. I swept through the related work, and this PR is now a duplicate or superseded.

Close as superseded. The Mistral stream crash is real, but this branch is not the canonical fix: it misses structured delta.content, mixes in unrelated logging-cache work, and open #75339 now tracks the same parser repair more completely.

So I’m closing this here and keeping the remaining discussion on the canonical linked item.

Review details

Best possible solution:

Close this source PR in favor of #75339 or a focused replacement that normalizes structured stream content in the OpenAI-compatible transport and excludes the logging-cache changes.

Do we have a high-confidence way to reproduce the issue?

Yes. #67192 gives concrete Mistral `/think high` channel reproduction steps, and the source-level path is clear: an array/object `choice.delta.content` reaches the text append before reasoning parsing.

Is this the best way to solve the issue?

No. The PR direction is useful, but the helper-only parsing is incomplete and the unrelated logging change makes this a worse merge path than the focused normalizer in #75339.

Security review:

Security review cleared: The diff does not introduce CI, dependency, package-resolution, secret-handling, or downloaded-code changes; the logging concern is functional rather than supply-chain related.

What I checked:

Likely related people:

  • Peter Steinberger: Current-main blame for the central stream parser, Mistral compat patch, and Mistral provider docs points to Peter in this checkout, and prior ClawSweeper review context also routed this stream/Mistral surface to Peter from recent history. (role: recent maintainer / adjacent owner; confidence: medium; commits: 2be441062d7b, a972c9ec4547, be8c24633aaa; files: src/agents/openai-transport-stream.ts, src/agents/openai-transport-stream.test.ts, extensions/mistral/api.ts)

Codex review notes: model gpt-5.5, reasoning high; reviewed against 0c6c1cac7691.


clawsweeper Bot commented Apr 29, 2026

Codex review: needs changes before merge.

What this changes:

The PR extends OpenAI completions stream parsing for Mistral reasoning_content object/array blocks, adds stream regression tests, and also changes logger settings cache behavior with a new logging test.

Required change before merge:

The linked crash is valid, the fix boundary is narrow, and a repair worker can produce a focused replacement or rework that deliberately excludes the unrelated logging changes.

Review details

Best possible solution:

Land a focused transport fix that keeps the Mistral stream handling in src/agents/openai-transport-stream.ts, adds precise regression coverage in the existing stream test file, and leaves the logging-cache problem to a separate issue or PR. After that lands, the linked Mistral crash report can close as implemented and duplicate unmerged PRs can remain closed or be marked superseded.

Acceptance criteria:

  • pnpm test src/agents/openai-transport-stream.test.ts
  • pnpm exec oxfmt --check --threads=1 src/agents/openai-transport-stream.ts src/agents/openai-transport-stream.test.ts CHANGELOG.md
  • pnpm check:changed in Testbox before handoff if code/changelog changes are committed

What I checked:

Likely related people:

  • Peter Steinberger: Local blame and recent file history for the central OpenAI transport stream, stream tests, and Mistral plugin API/docs point to Peter as the current-main maintainer for this surface in the available checkout history. (role: recent maintainer / current-main owner; confidence: medium; commits: a972c9ec4547, be8c24633aaa; files: src/agents/openai-transport-stream.ts, src/agents/openai-transport-stream.test.ts, extensions/mistral/api.ts)

Remaining risk / open question:

  • Merging this PR as-is would bundle an unrelated logging-cache change with the Mistral parser fix, including a reviewed risk that cachedLogger and cachedSettings can remain out of sync.
  • The Mistral parser fix still needs focused validation for non-string delta.content, array/object reasoning_content, empty arrays, and post-tool-call reasoning/text ordering.
  • No tests were executed in this read-only review; targeted stream tests and the changed gate remain required after rework.

Codex review notes: model gpt-5.5, reasoning high; reviewed against acae48b790fa.


Labels

agents Agent runtime and tooling size: M


Development

Successfully merging this pull request may close these issues.

[Bug]: Mistral reasoning (/think) crashes channel with [object object] due to missing stream wrapper

2 participants