
feat: add OpenAI Conversations API state support via OpenAIResponsesModelSettings.openai_conversation_id#5224

Merged
DouweM merged 11 commits into pydantic:main from corytomlinson:plan-openai-conversations
May 5, 2026
Conversation

@corytomlinson
Contributor

@corytomlinson corytomlinson commented Apr 28, 2026

Closes #5222.

Summary

  • Adds OpenAIResponsesModelSettings.openai_conversation_id with concrete conversation IDs and 'auto'.
  • Sends the Responses API conversation parameter, stores returned conversation IDs in ModelResponse.provider_details['conversation_id'], and trims already-stored history for matching same-provider conversations.
  • Rejects simultaneous openai_previous_response_id and openai_conversation_id, covers streaming metadata, and documents durable Conversations API usage.
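
The mutual-exclusivity check can be sketched roughly as follows. This is a hypothetical standalone helper operating on a plain settings dict for illustration; in the PR the check lives inside the model's request mapping:

```python
def validate_responses_settings(settings: dict) -> None:
    """Reject the invalid combination of settings described above.

    Hypothetical helper; the key names match the documented
    OpenAIResponsesModelSettings fields.
    """
    if settings.get('openai_previous_response_id') and settings.get('openai_conversation_id'):
        raise ValueError(
            'openai_previous_response_id and openai_conversation_id '
            'cannot be used at the same time'
        )
```

Either setting alone is accepted; only the combination is rejected.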

Update after #5251

After #5251 added generic Pydantic AI conversation_id support, this PR keeps the OpenAI Conversations API ID provider-specific in ModelResponse.provider_details['conversation_id'].

openai_conversation_id='auto' now scopes reuse to the active Pydantic AI conversation when message-level conversation_id values are available. This prevents conversation_id='new' from accidentally continuing the previous OpenAI server-side conversation, while explicit openai_conversation_id='<id>' still allows deliberate reuse.

Test plan

  • uv run pytest tests/models/test_openai_responses.py -k openai_conversation_id --record-mode=none
  • uv run pytest tests/test_examples.py -k 'docs/models/openai.md'
  • uv run coverage run -m pytest tests/models/test_openai_responses.py -k openai_conversation_id --record-mode=none
  • uv run ruff check pydantic_ai_slim/pydantic_ai/models/openai.py tests/models/test_openai_responses.py
  • uv run ruff format --check pydantic_ai_slim/pydantic_ai/models/openai.py tests/models/test_openai_responses.py
  • Pre-commit hooks run during commit

Checklist

  • Any AI generated code has been reviewed line-by-line by the human PR author, who stands by it.
  • No breaking changes in accordance with the version policy.
  • PR title is fit for the release changelog.

@github-actions github-actions Bot added the size: S (Small PR, ≤100 weighted lines) and docs (Improvements or additions to documentation) labels Apr 28, 2026
@DouweM
Collaborator

DouweM commented Apr 28, 2026

@corytomlinson Thanks, plan looks good! Can you please generate code from it + verify that the implementation works for you (including auto etc) + push it in? Then we can likely take it from there :)

@github-actions github-actions Bot added the size: M (Medium PR, 101-500 weighted lines) label and removed the size: S (Small PR, ≤100 weighted lines) label Apr 29, 2026
@corytomlinson corytomlinson changed the title from "plan: add OpenAI Conversations API state support" to "feat: add OpenAI Conversations API state support" Apr 29, 2026
@DouweM
Collaborator

DouweM commented Apr 29, 2026

@corytomlinson Thanks Cory! At the same time, I'm working on #5251. Any opinion on how they should interact? Should the OpenAI conversation_id become the Model{Request,Response}.conversation_id as well, or would it be OK if Pydantic AI had its own internal conversation IDs (with OpenAI's available on ModelResponse.provider_metadata)?

@corytomlinson
Contributor Author

Good question. My instinct is to keep them separate by default.

I see Model{Request,Response}.conversation_id from #5251 as the Pydantic AI/application conversation identifier: useful for tracing, UI thread correlation, mixed-provider histories, and message-history round trips. The OpenAI Conversations ID is a provider-side state handle that changes how the Responses API resolves context, can be absent/deleted, and only has meaning for OpenAI.

So I would not automatically stamp OpenAI's conversation_id into the generic conversation_id field. I'd keep OpenAI's ID in provider-specific metadata/details and keep openai_conversation_id as the setting that controls OpenAI server-side state. If a user really wants the generic Pydantic AI conversation ID to match an OpenAI conversation ID, they can do that explicitly.

One thing that may be worth adding once #5251 is in: openai_conversation_id='auto' should probably prefer an OpenAI conversation ID from messages with the same Pydantic AI conversation_id, when available, so conversation_id='new' does not accidentally continue the old OpenAI server-side conversation. Explicit openai_conversation_id=<id> should still allow deliberate reuse.

@corytomlinson
Contributor Author

corytomlinson commented May 1, 2026

Following up on the #5251 question: OpenAI conversation IDs remain provider-specific, and openai_conversation_id='auto' now only reuses one from the active Pydantic AI conversation_id when that metadata is available. Explicit OpenAI conversation IDs still work for deliberate reuse.

@corytomlinson
Contributor Author

corytomlinson commented May 4, 2026

Adding some evidence gathered during user testing. Below is a Logfire trace showing two turns: the first turn makes the request that creates the durable conversation object in the Conversations API, and the second turn includes a check that the previously created conversation object exists...

[Screenshot: Logfire trace of the two-turn conversation]

And here are the corresponding logs from OpenAI which validate the object and all referenced responses API calls...

[Screenshot: corresponding OpenAI logs validating the conversation object and the referenced Responses API calls]

There are no signs of duplicate context in either Logfire or OpenAI, which confirms the intended behavior:

  • Pass openai_conversation_id='conv_...'.
  • message_history is scanned backward for the latest OpenAI ModelResponse whose provider_details['conversation_id'] matches.
  • If found, only messages after that response plus the current user message are sent.
  • If not found, the full message_history is sent.
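
The trimming steps above can be sketched as a standalone function. Plain dicts stand in for the real ModelRequest/ModelResponse objects; field names are illustrative:

```python
def trim_history_for_conversation(
    messages: list[dict],
    conversation_id: str,
) -> list[dict]:
    """Drop history the OpenAI conversation already stores server-side.

    Scans message_history backward for the latest response whose
    provider_details['conversation_id'] matches; if found, only the
    messages after it are kept, otherwise the full history is kept.
    """
    for i in range(len(messages) - 1, -1, -1):
        msg = messages[i]
        if (
            msg.get('kind') == 'response'
            and msg.get('provider_details', {}).get('conversation_id') == conversation_id
        ):
            return messages[i + 1 :]  # the server already has messages[: i + 1]
    return messages  # no matching response: send everything
```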

Not directly in scope for this feature, but I have also validated that server-side compaction works when sending openai_context_management with 'type': 'compaction' and a compaction threshold set. The encrypted compaction parts are included in the corresponding messages...

[Screenshot: messages including the encrypted compaction parts]

Going to move this PR to ready for review. @DouweM please let me know if further work is necessary.

@corytomlinson corytomlinson marked this pull request as ready for review May 4, 2026 23:41
Copilot AI review requested due to automatic review settings May 4, 2026 23:41
Contributor

Copilot AI left a comment


Pull request overview

Adds first-class support for OpenAI Responses API durable conversation state (conversation) in OpenAIResponsesModel, including automatic reuse and safe history trimming, plus tests/cassettes and documentation.

Changes:

  • Introduces OpenAIResponsesModelSettings.openai_conversation_id ('auto' or concrete conv_...) and enforces mutual exclusivity with openai_previous_response_id.
  • Sends conversation on /responses requests, persists returned IDs in ModelResponse.provider_details['conversation_id'], and trims already-stored history for matching OpenAI conversations (including streaming).
  • Adds integration tests + VCR cassettes, updates cassette header filtering, and documents durable conversation usage.

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 1 comment.

Summary per file:
tests/models/test_openai_responses.py Adds conversation-state integration tests and a helper to create/delete conversations.
tests/models/cassettes/test_openai_responses/test_openai_conversation_id_tool_call_continuation.yaml Records tool-call continuation behavior when using conversation.
tests/models/cassettes/test_openai_responses/test_openai_conversation_id_streaming_provider_details.yaml Records streaming events including conversation metadata.
tests/models/cassettes/test_openai_responses/test_openai_conversation_id_preserves_mismatched_history.yaml Records behavior ensuring mismatched history isn’t incorrectly trimmed.
tests/models/cassettes/test_openai_responses/test_openai_conversation_id_explicit_and_auto.yaml Records explicit conversation ID + 'auto' reuse behavior.
tests/models/cassettes/test_openai_responses/test_openai_conversation_id_auto_without_history.yaml Records 'auto' behavior when no prior conversation exists.
tests/models/cassettes/test_openai_responses/test_openai_conversation_id_auto_respects_pydantic_ai_conversation_id.yaml Records 'auto' scoping to the active Pydantic AI conversation_id.
tests/json_body_serializer.py Filters additional OpenAI-related headers from recorded cassettes.
pydantic_ai_slim/pydantic_ai/models/openai.py Implements openai_conversation_id, request mapping, response metadata capture, and history trimming logic.
docs/models/openai.md Documents durable conversations and updates compaction compatibility notes.


Comment on lines +109 to +116:

```python
@asynccontextmanager
async def _openai_conversation(openai_api_key: str) -> AsyncIterator[tuple[AsyncOpenAI, str]]:
    async with AsyncOpenAI(api_key=openai_api_key) as async_client:
        conversation = await async_client.conversations.create()
        try:
            yield async_client, conversation.id
        finally:
            await async_client.conversations.delete(conversation.id)
```
Contributor Author


Good catch. Fixed in 55147b3 by quoting the AsyncOpenAI return annotation so the module can still import when the optional OpenAI dependency is unavailable. I also verified this with an import check that blocks openai, plus the focused openai_conversation_id cassette tests.

@DouweM DouweM changed the title from "feat: add OpenAI Conversations API state support" to "feat: add OpenAI Conversations API state support via OpenAIResponsesModelSettings.openai_conversation_id" May 5, 2026
@DouweM DouweM merged commit 671e305 into pydantic:main May 5, 2026
44 checks passed
@DouweM
Copy link
Copy Markdown
Collaborator

DouweM commented May 5, 2026

@corytomlinson Thank you Cory!

@DouweM DouweM added the feature (New feature request, or PR implementing a feature/enhancement) label and removed the docs (Improvements or additions to documentation) label May 5, 2026


Development

Successfully merging this pull request may close these issues.

Add OpenAI Conversations API State Support to OpenAIResponsesModel

3 participants