bug: OpenAI Responses API hardcodes store: false — breaks all multi-turn conversations with openai/* models #16803
Bug Description
OpenClaw hardcodes `store: false` in the OpenAI Responses API provider, causing all `openai/*` models to fail on multi-turn conversations with HTTP 404 errors.
Steps to Reproduce
- Configure any `openai/*` model (e.g. `openai/gpt-5-mini`) as an agent's primary model
- Send a first message to the agent — succeeds
- Send a second message in the same session — fails with 404
Expected Behavior
Multi-turn conversations work without errors. The `previous_response_id` chain is valid because reasoning items are persisted server-side.
Actual Behavior
Second (and all subsequent) turns fail with:
```
HTTP 404: Item with id 'rs_...' not found. Items are not persisted when store is set to false.
Try again with store set to true, or remove this item from your input.
```
Root Cause
In `@mariozechner/pi-ai/dist/providers/openai-responses.js` (line ~153), `store` is hardcoded to `false`:
```js
const params = {
    model: model.id,
    input: messages,
    stream: true,
    store: false, // <-- hardcoded
};
```

With `store: false`, OpenAI does not persist reasoning items (`rs_*` IDs) server-side. OpenClaw chains `previous_response_id` across turns, which references these unpersisted items, causing the 404.
This is not a stale session issue. It reproduces on completely fresh sessions after the very first multi-turn exchange. Session resets do not fix it.
Note: `compat.supportsStore` already exists in the config schema, but it only controls the Completions API path (`openai-completions.js`), not the Responses API path.
Proposed Fix
Minimum fix: change `store: false` to `store: true` in `openai-responses.js`.
Ideal fix: expose `store` as a configurable option at the model/provider level (extending `compat.supportsStore` to also apply to the Responses API path). The default should be `true` for the Responses API, since `previous_response_id` requires it.
Bonus: `store: true` also enables OpenAI prompt caching, giving 50% cheaper cached input tokens — so this is both a bug fix and a cost optimization.
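The ideal fix could look like the sketch below. It assumes the provider receives a `model` object carrying the `compat` block from the config schema; the exact shape of that object is an assumption here, not confirmed library API.

```javascript
// Sketch: resolve `store` from config instead of hardcoding it.
// `model.compat.supportsStore` is assumed to be the existing config flag;
// for the Responses API the default must be true, since
// previous_response_id chaining requires server-side persistence.
function resolveStore(model) {
  // Only an explicit `supportsStore: false` opts out.
  return model?.compat?.supportsStore !== false;
}

function buildParams(model, messages) {
  return {
    model: model.id,
    input: messages,
    stream: true,
    store: resolveStore(model), // configurable, defaults to true
  };
}

// Default: store is true when no compat block is present.
console.log(buildParams({ id: "gpt-5-mini" }, []).store); // true
// Explicit opt-out remains possible.
console.log(
  buildParams({ id: "gpt-5-mini", compat: { supportsStore: false } }, []).store
); // false
```

Defaulting to `true` keeps existing configs working while still allowing providers that reject `store` to opt out.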
Environment
- OpenClaw: 2026.2.9
- Affected models: all `openai/*` models (tested with `openai/gpt-5-mini`)
- `@mariozechner/pi-ai`: version bundled with OpenClaw 2026.2.9
- Not affected: Gemini, OpenRouter, and other non-OpenAI providers (they use different API paths)
Related Issues
- OpenAI Responses API: Stale previous_response_id causes 404 'Item not found' errors #12885 (closed — the workaround was stripping `rs_*` items from replay, which only addressed the symptom, not the root cause)
- Vercel AI SDK #7543 — same upstream issue
- OpenAI Agents Python #2020 — same issue in another framework
Current Workaround
```sh
sed -i 's/store: false,/store: true,/' \
  /path/to/node_modules/@mariozechner/pi-ai/dist/providers/openai-responses.js
```

This must be re-applied after every OpenClaw update.