fix(responses): emit content_part.added event for non-OpenAI models #24445
Merged: krrish-berri-2 merged 2 commits into BerriAI:main on Mar 31, 2026
Conversation
Greptile Summary

This PR fixes a spec compliance gap in the Responses API streaming iterator used for non-OpenAI models: the spec-required `response.content_part.added` event was never emitted. The fix is narrowly scoped to message items; reasoning-first output items still return early without emitting `content_part.added`. Confidence Score: 4/5

Changes:
| Filename | Overview |
|---|---|
| litellm/responses/litellm_completion_transformation/streaming_iterator.py | Core fix: emits content_part.added immediately after output_item.added for message items by calling the previously unused create_content_part_added_event() inside _ensure_output_item_for_chunk. Logic is correct; sequence numbers increment properly. The guard if not self.sent_content_part_added_event is technically redundant but harmless (outer sent_output_item_added_event already prevents re-entry). |
| tests/llm_translation/test_anthropic_completion.py | Integration test updated to include CONTENT_PART_ADDED in the expected event sequence. The assertion block for the new event was already present; the diff only adds the event to expected_events, making the test stricter and correctly validating the fix. |
| tests/test_litellm/responses/litellm_completion_transformation/test_litellm_completion_responses.py | New TestEnsureOutputItemContentPartAdded class covers three key cases (message item, reasoning item, idempotency). All tests are pure unit tests with no network calls, using __new__ to bypass __init__ and MagicMock for stream chunks. Missing a test for reasoning-then-text transitions, but that is a pre-existing design gap. |
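The `__new__`/MagicMock pattern those unit tests rely on can be sketched as follows. `HeavyIterator` and `describe_chunk` are stand-ins for illustration, not the real litellm classes; the point is only the technique of bypassing an expensive `__init__` and mocking a stream chunk.

```python
from unittest.mock import MagicMock

# Stand-in class: the real iterator's __init__ would set up provider clients,
# which a pure unit test wants to skip.
class HeavyIterator:
    def __init__(self, model, client):
        raise RuntimeError("heavy setup we do not want in a unit test")

    def describe_chunk(self, chunk):
        # Reads the same attribute path a completion stream chunk exposes.
        return f"delta={chunk.choices[0].delta.content!r}"

# __new__ allocates the instance without ever running __init__.
it = HeavyIterator.__new__(HeavyIterator)

# MagicMock auto-creates the nested chunk.choices[0].delta.content path.
chunk = MagicMock()
chunk.choices[0].delta.content = "Hello"
print(it.describe_chunk(chunk))  # delta='Hello'
```

Because no network client is ever constructed, tests written this way stay fast and hermetic, matching the "no network calls" property noted above.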
Sequence Diagram

```mermaid
sequenceDiagram
    participant Provider as Non-OpenAI Provider<br/>(Claude / Gemini)
    participant Iterator as LiteLLMCompletionStreamingIterator
    participant Consumer as Downstream Parser
    Note over Iterator,Consumer: Before fix (broken)
    Provider->>Iterator: streaming chunk (text delta)
    Iterator->>Consumer: response.created
    Iterator->>Consumer: response.in_progress
    Iterator->>Consumer: response.output_item.added (message)
    Iterator->>Consumer: response.output_text.delta
    Note over Consumer: ❌ Fails — text part not initialised
    Note over Iterator,Consumer: After fix (correct)
    Provider->>Iterator: streaming chunk (text delta)
    Iterator->>Consumer: response.created
    Iterator->>Consumer: response.in_progress
    Iterator->>Consumer: response.output_item.added (message)
    Iterator->>Consumer: response.content_part.added ← NEW
    Iterator->>Consumer: response.output_text.delta
    Note over Consumer: ✅ Works — text part already initialised
    Iterator->>Consumer: response.output_text.done
    Iterator->>Consumer: response.content_part.done
    Iterator->>Consumer: response.output_item.done
    Iterator->>Consumer: response.completed
```
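A minimal sketch of why the ordering matters downstream: a consumer that initializes a text part on `content_part.added` and appends on `output_text.delta`. The event field names (`item_id`, `content_index`, `delta`) are assumptions for illustration, not the exact SDK schema.

```python
# Illustrative downstream consumer: without a prior content_part.added,
# an output_text.delta has no text part to append to.
def consume(events):
    text_parts = {}  # (item_id, content_index) -> accumulated text
    for ev in events:
        if ev["type"] == "response.content_part.added":
            text_parts[(ev["item_id"], ev["content_index"])] = ""
        elif ev["type"] == "response.output_text.delta":
            key = (ev["item_id"], ev["content_index"])
            if key not in text_parts:
                # This is the failure mode the PR fixes.
                raise KeyError(f"text part {key} not found")
            text_parts[key] += ev["delta"]
    return text_parts

fixed_stream = [
    {"type": "response.output_item.added"},
    {"type": "response.content_part.added", "item_id": "msg_1", "content_index": 0},
    {"type": "response.output_text.delta", "item_id": "msg_1", "content_index": 0, "delta": "Hi"},
]
print(consume(fixed_stream))  # {('msg_1', 0): 'Hi'}
```

Dropping the `content_part.added` entry from `fixed_stream` reproduces the pre-fix `KeyError`, mirroring the "text part not initialised" note in the diagram.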
litellm/responses/litellm_completion_transformation/streaming_iterator.py
LiteLLMCompletionStreamingIterator defined create_content_part_added_event() but never called it, so non-OpenAI providers (Claude, Gemini, etc.) skipped this spec-required event. Downstream parsers that process content_part.added to initialize the text part structure would fail when output_text.delta arrived before the text part existed.
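The shape of the fix can be sketched as below. Method and flag names (`create_content_part_added_event`, `_ensure_output_item_for_chunk`, `sent_output_item_added_event`, `sent_content_part_added_event`) come from the PR; the bodies are simplified stand-ins, not the actual litellm implementation.

```python
# Hedged sketch: emit content_part.added right after output_item.added for
# message items, with monotonically increasing sequence numbers.
class IteratorSketch:
    def __init__(self):
        self.sequence_number = 0
        self.sent_output_item_added_event = False
        self.sent_content_part_added_event = False

    def _next_seq(self):
        self.sequence_number += 1
        return self.sequence_number

    def create_content_part_added_event(self):
        # In the PR, this helper existed but was never called before the fix.
        return {"type": "response.content_part.added",
                "sequence_number": self._next_seq()}

    def _ensure_output_item_for_chunk(self, is_message_item):
        events = []
        if not self.sent_output_item_added_event:
            self.sent_output_item_added_event = True
            events.append({"type": "response.output_item.added",
                           "sequence_number": self._next_seq()})
            # The fix: message items immediately get a content part.
            # Reasoning-first items skip this branch, as noted above.
            if is_message_item and not self.sent_content_part_added_event:
                self.sent_content_part_added_event = True
                events.append(self.create_content_part_added_event())
        return events

it = IteratorSketch()
events = it._ensure_output_item_for_chunk(is_message_item=True)
# Emits output_item.added (seq 1) then content_part.added (seq 2);
# a second call emits nothing, which is the idempotency the tests check.
```

As the review table notes, the inner `sent_content_part_added_event` guard is redundant once the outer flag is set, but it is harmless and makes the intent explicit.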
…_part.added

- Update test_anthropic_via_responses_api expected_events to include CONTENT_PART_ADDED between OUTPUT_ITEM_ADDED and OUTPUT_TEXT_DELTA
- Add TestEnsureOutputItemContentPartAdded with 3 mock tests: message item emits content_part.added, reasoning item does not, and the event is only emitted once
Summary

LiteLLMCompletionStreamingIterator (used for non-OpenAI providers like Claude and Gemini) defined `create_content_part_added_event()` but never called it. This meant the spec-required `response.content_part.added` SSE event was missing from the stream. Downstream parsers process `content_part.added` to initialize the text part structure; when `output_text.delta` arrived before the text part existed, they would fail with `text part <id> not found`.

Fix

Call `create_content_part_added_event()` in `_ensure_output_item_for_chunk()` immediately after appending `output_item.added` for message items. This matches the event ordering that native Responses API providers (OpenAI) already emit.

Event ordering (before → after)
Before (broken): `output_item.added` → `output_text.delta` (text part never initialized)

After (fixed): `output_item.added` → `content_part.added` → `output_text.delta` → `output_text.done` → `content_part.done` → `output_item.done`
Testing
Verified with e2e streaming tests against Claude and Gemini models through the Responses API.
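The ordering property those e2e tests assert can be expressed as a small subsequence check. The event names follow the Responses API streaming spec; the helper itself is illustrative, not litellm test code.

```python
# Spec-required relative order for a single-message streamed response.
REQUIRED_ORDER = [
    "response.created",
    "response.in_progress",
    "response.output_item.added",
    "response.content_part.added",
    "response.output_text.delta",
    "response.output_text.done",
    "response.content_part.done",
    "response.output_item.done",
    "response.completed",
]

def in_required_order(seen):
    """True if the required events appear in `seen` in the given relative
    order (extra events, e.g. repeated text deltas, are allowed between)."""
    it = iter(seen)
    return all(any(ev == want for ev in it) for want in REQUIRED_ORDER)
```

A pre-fix stream from a non-OpenAI provider fails this check precisely because `response.content_part.added` is absent, while a post-fix stream passes.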