
[Bug]: API Protocol Error 400 with Azure Reasoning Models in v2026.3.23 (works in v2026.3.18) #53506

@zjunsen

Description

Summary

In OpenClaw v2026.3.23, using Azure OpenAI reasoning models (e.g., gpt-5.2) with "api": "openai-responses" triggers a 400 protocol error stating that a reasoning item was provided without its required following item. This configuration follows the official Microsoft Foundry integration guide and was fully functional in v2026.3.18.
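For reference, a minimal config.json fragment in the shape this report describes. Only the "api" and "reasoning" keys and the model name (from the status line below) appear in the report itself; the surrounding field names (endpoint, key) are placeholders standing in for whatever the Foundry guide specifies:

```json
{
  "models": {
    "custom-resource/gpt-5.2": {
      "api": "openai-responses",
      "reasoning": true,
      "baseUrl": "https://<your-resource>.openai.azure.com",
      "apiKey": "<AZURE_OPENAI_KEY>"
    }
  }
}
```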

Problem to solve

Streaming Protocol Breakage: The openai-responses handler in the latest version fails to stitch reasoning segments with their subsequent content segments, throwing: 400 Item 'rs_...' of type 'reasoning' was provided without its required following item.
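The constraint the 400 error enforces can be sketched as follows: in the Responses API, a reasoning item in the input must be immediately followed by its associated output item (e.g. an assistant message). This validator is a hypothetical illustration of that ordering rule, not OpenClaw code:

```python
def find_orphan_reasoning(items):
    """Return ids of reasoning items with no required following item."""
    orphans = []
    for i, item in enumerate(items):
        if item.get("type") == "reasoning":
            nxt = items[i + 1] if i + 1 < len(items) else None
            # a reasoning item must be followed by a non-reasoning item
            if nxt is None or nxt.get("type") == "reasoning":
                orphans.append(item.get("id"))
    return orphans

ok = [
    {"type": "reasoning", "id": "rs_1"},
    {"type": "message", "role": "assistant", "content": "..."},
]
broken = [{"type": "reasoning", "id": "rs_1"}]  # never stitched to content

print(find_orphan_reasoning(ok))      # []
print(find_orphan_reasoning(broken))  # ['rs_1']
```

A stream handler that drops or reorders the content segment after a reasoning segment would produce exactly the `broken` shape when the conversation is replayed to the server.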

Metadata Loss on Fallback: Switching to "api": "openai-completions" bypasses the error but results in unknown token counts in openclaw status. Consequently, the Control WebUI shows $0 cost, breaking the usage monitoring and budget tracking features.

Proposed solution

Restore the streaming logic for openai-responses that was present in v2026.3.18 to correctly handle the reasoning-to-content sequence.

Ensure that token usage and cost metadata are correctly extracted from the Azure response headers/payload even when using reasoning models.

Alternatives considered

Downgrading to openai-completions: It "works" for chatting, but it's not a viable solution because it breaks the cost/token tracking system.

Manual reasoning toggles: Setting "reasoning": true or false in config.json does not resolve the 400 error in v2026.3.23.

Impact

All users following the Microsoft Foundry integration standards are affected.

Total loss of cost monitoring and token usage stats for Azure reasoning models.

Regression of core functionality that was previously stable in mid-March 2026 versions.

Evidence/examples

Working Version: v2026.3.18

Broken Version: v2026.3.23

Error Log:
run error: 400 Item 'rs_...' of type 'reasoning' was provided without its required following item.
agent main | session main | custom-resource/gpt-5.2 | think medium | tokens unknown/400k

Configuration Context:
Following the official guide: https://techcommunity.microsoft.com/blog/educatordeveloperblog/integrating-microsoft-foundry-with-openclaw-step-by-step-model-configuration/4495586

Additional information

When using openai-completions as a workaround, openclaw status reports unknown tokens for active sessions. This suggests that the parsing of usage metadata from responses is also broken, or is incompatible with how reasoning models are handled in the 3.23 build.
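One possible cause worth checking (a hypothesis, not something confirmed in this report): in streamed chat completions, the OpenAI-style API only reports usage on a final extra chunk whose `choices` list is empty, and only when the request sets `stream_options: {"include_usage": true}`. A client that ignores that final chunk never sees token counts. A minimal sketch of extracting usage from such a stream, using mock chunk dicts:

```python
def extract_usage(chunks):
    """Return the usage dict from a stream of completion chunks, if any."""
    usage = None
    for chunk in chunks:
        if chunk.get("usage"):  # present only on the final, choices-empty chunk
            usage = chunk["usage"]
    return usage

stream = [
    {"choices": [{"delta": {"content": "Hi"}}], "usage": None},
    {"choices": [], "usage": {"prompt_tokens": 12, "completion_tokens": 3}},
]
print(extract_usage(stream))  # {'prompt_tokens': 12, 'completion_tokens': 3}
```

If OpenClaw's completions path stops iterating once it sees a finish reason, the usage chunk would be skipped and the status line would fall back to "tokens unknown".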
