bug: OpenAI Responses API hardcodes store: false — breaks all multi-turn conversations with openai/* models #16803

@mark9232

Bug Description

OpenClaw hardcodes store: false in the OpenAI Responses API provider, causing all openai/* models to fail on multi-turn conversations with HTTP 404 errors.

Steps to Reproduce

  1. Configure any openai/* model (e.g. openai/gpt-5-mini) as an agent's primary model
  2. Send a first message to the agent — succeeds
  3. Send a second message in the same session — fails with 404

Expected Behavior

Multi-turn conversations work without errors. The previous_response_id chain is valid because reasoning items are persisted server-side.

Actual Behavior

Second (and all subsequent) turns fail with:

HTTP 404: Item with id 'rs_...' not found. Items are not persisted when store is set to false.
Try again with store set to true, or remove this item from your input.

Root Cause

In @mariozechner/pi-ai/dist/providers/openai-responses.js (line ~153), store is hardcoded to false:

const params = {
    model: model.id,
    input: messages,
    stream: true,
    store: false,  // <-- hardcoded
};

When store: false, OpenAI does not persist reasoning items (rs_* IDs) server-side. OpenClaw chains previous_response_id across turns, which references these unpersisted items, causing the 404.
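Illustratively, the failure mode looks like this (the parameter shapes below are a sketch based on the snippet above; the exact IDs and message format are hypothetical):

```javascript
// Turn 1: with store: false, OpenAI never persists the response or its
// rs_* reasoning items server-side.
const turn1 = {
  model: "gpt-5-mini",
  input: [{ role: "user", content: "hello" }],
  stream: true,
  store: false,
};

// Turn 2: the client chains previous_response_id, which points at a
// response the API never stored -- the request fails with HTTP 404.
const turn2 = {
  model: "gpt-5-mini",
  input: [{ role: "user", content: "follow-up" }],
  previous_response_id: "resp_abc123", // references unpersisted turn-1 output
  stream: true,
  store: false,
};
```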

This is not a stale session issue. It reproduces on completely fresh sessions after the very first multi-turn exchange. Session resets do not fix it.

Note: compat.supportsStore already exists in the config schema, but it only controls the Completions API path (openai-completions.js), not the Responses API path.

Proposed Fix

Minimum fix: Change store: false to store: true in openai-responses.js.

Ideal fix: Expose store as a configurable option at the model/provider level (extending compat.supportsStore to also apply to the Responses API path). Default should be true for the Responses API since previous_response_id requires it.
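A minimal sketch of what the configurable version could look like. The helper name and the exact compat shape are assumptions for illustration, not the actual pi-ai API:

```javascript
// Hypothetical helper: resolve `store` from a model's compat config.
// Assumption: an explicit compat.supportsStore wins; otherwise the
// Responses API defaults to true (previous_response_id requires
// server-side persistence), while other paths keep false.
function resolveStore(compat, api) {
  if (compat && typeof compat.supportsStore === "boolean") {
    return compat.supportsStore;
  }
  return api === "responses";
}

// No override: Responses API stores by default.
const storeDefault = resolveStore(undefined, "responses"); // true

// Explicit opt-out still possible for gateways that reject `store`.
const storeOptOut = resolveStore({ supportsStore: false }, "responses"); // false
```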

Bonus: store: true also enables OpenAI prompt caching, giving 50% cheaper cached input tokens — so this is both a bug fix and a cost optimization.

Environment

  • OpenClaw: 2026.2.9
  • Affected models: All openai/* models (tested with openai/gpt-5-mini)
  • @mariozechner/pi-ai: version bundled with OpenClaw 2026.2.9
  • Not affected: Gemini, OpenRouter, and other non-OpenAI providers (they use different API paths)

Current Workaround

sed -i 's/store: false,/store: true,/' \
  /path/to/node_modules/@mariozechner/pi-ai/dist/providers/openai-responses.js

Must be re-applied after every OpenClaw update.
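To avoid re-applying it by hand, the sed can be wrapped in a small helper that is safe to run repeatedly, e.g. from a post-update hook (the function name is a placeholder; supply the real file path when calling it):

```shell
# Idempotent patch helper: flips store: false to store: true in the
# given file, then verifies the patch took effect.
patch_store() {
  sed -i 's/store: false,/store: true,/' "$1" && grep -q 'store: true,' "$1"
}
```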
