
[Bug]: Anthropic adapter drops thinking.summary when translating to OpenAI reasoning_effort #20998

@roni-frantchi

Description

What happened?

When using litellm.aanthropic_messages() with a non-Anthropic model (e.g. openai/gpt-5.1) and passing thinking={"type": "enabled", "budget_tokens": 5000, "summary": "concise"}, the summary field is silently dropped and replaced with a hardcoded "detailed".

Expected: The user's summary: "concise" should be preserved and sent to OpenAI's Responses API.

Actual: summary is dropped by translate_anthropic_thinking_to_reasoning_effort() (which returns the plain string "medium"), and _route_openai_thinking_to_responses_api_if_needed() then hardcodes summary: "detailed".

Steps to Reproduce

from litellm.llms.anthropic.experimental_pass_through.adapters.transformation import LiteLLMAnthropicMessagesAdapter
from litellm.llms.anthropic.experimental_pass_through.adapters.handler import LiteLLMMessagesToCompletionTransformationHandler

adapter = LiteLLMAnthropicMessagesAdapter()

# User passes summary="concise"
thinking = {"type": "enabled", "budget_tokens": 5000, "summary": "concise"}

# Step 1: translate_anthropic_thinking_to_reasoning_effort drops summary
effort = adapter.translate_anthropic_thinking_to_reasoning_effort(thinking)
print(f"reasoning_effort: {effort!r}")  # 'medium' — summary is gone

# Step 2: translate_thinking_for_model wraps it
result = adapter.translate_thinking_for_model(thinking, "openai/gpt-5.1")
print(f"result: {result!r}")  # {'reasoning_effort': 'medium'} — no summary

# Step 3: handler hardcodes summary="detailed"
completion_kwargs = {"model": "openai/gpt-5.1", "reasoning_effort": "medium"}
LiteLLMMessagesToCompletionTransformationHandler._route_openai_thinking_to_responses_api_if_needed(
    completion_kwargs, thinking=thinking
)
print(f"final: {completion_kwargs!r}")
# {'model': 'responses/openai/gpt-5.1', 'reasoning_effort': {'effort': 'medium', 'summary': 'detailed'}}
# Expected summary: "concise", got: "detailed"

Suggested Fix

The fix is small: translate_anthropic_thinking_to_reasoning_effort() should return a dict instead of a plain string whenever summary is present in the thinking config:

# In LiteLLMAnthropicMessagesAdapter.translate_anthropic_thinking_to_reasoning_effort():
#
# Instead of always returning a plain string like "medium",
# return {"effort": "medium", "summary": "concise"} when summary is present.
# This way _route_openai_thinking_to_responses_api_if_needed() will see it's
# already a dict with "summary" and won't hardcode "detailed".

# effort_str is the effort level the method already computes from budget_tokens.
summary = thinking.get("summary")
if summary:
    return {"effort": effort_str, "summary": summary}
return effort_str
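To make the fix concrete, here is a self-contained sketch of the proposed behavior. The budget_tokens-to-effort thresholds below are illustrative assumptions, not litellm's actual mapping; the point is that the user's summary survives the translation:

```python
# Standalone sketch of the proposed fix. The budget_tokens -> effort
# thresholds are hypothetical; only the summary-preservation logic matters.
from typing import Dict, Union


def translate_thinking_to_reasoning_effort(
    thinking: Dict,
) -> Union[str, Dict[str, str]]:
    budget = thinking.get("budget_tokens", 0)
    # Hypothetical thresholds, for illustration only.
    if budget < 1024:
        effort_str = "low"
    elif budget < 8192:
        effort_str = "medium"
    else:
        effort_str = "high"

    summary = thinking.get("summary")
    if summary:
        # Preserve the caller's summary preference instead of dropping it.
        return {"effort": effort_str, "summary": summary}
    return effort_str


print(translate_thinking_to_reasoning_effort(
    {"type": "enabled", "budget_tokens": 5000, "summary": "concise"}
))
# {'effort': 'medium', 'summary': 'concise'}
print(translate_thinking_to_reasoning_effort(
    {"type": "enabled", "budget_tokens": 5000}
))
# 'medium'
```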

The handler already handles dicts correctly (lines 76-83 of handler.py): it only adds "summary": "detailed" when the key is absent.
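A minimal sketch of that handler behavior, reconstructed from the issue description (assumed, not copied from handler.py): a plain string gets wrapped with a hardcoded summary, while a dict only has "summary" filled in when the key is missing.

```python
# Sketch of the described routing behavior; the function name and exact
# shape are assumptions based on this issue, not litellm's real code.
from typing import Any, Dict


def route_reasoning_effort(completion_kwargs: Dict[str, Any]) -> None:
    effort = completion_kwargs.get("reasoning_effort")
    if isinstance(effort, str):
        # Plain string: wrap it and hardcode the summary (the reported bug
        # path, reached because the translation dropped the user's summary).
        completion_kwargs["reasoning_effort"] = {
            "effort": effort,
            "summary": "detailed",
        }
    elif isinstance(effort, dict):
        # Dict input: only fill in "summary" when the key is absent,
        # so a dict carrying the user's summary passes through untouched.
        effort.setdefault("summary", "detailed")


kwargs = {"reasoning_effort": {"effort": "medium", "summary": "concise"}}
route_reasoning_effort(kwargs)
print(kwargs["reasoning_effort"])
# {'effort': 'medium', 'summary': 'concise'} -- user's summary preserved
```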

We'd be happy to open a PR for this if you'd accept one.

Workaround: We currently strip thinking from the kwargs and pass reasoning_effort as a dict directly to bypass the translation.
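The workaround can be sketched as a small kwargs preprocessor. The effort level chosen here is an assumption the caller must make themselves, since bypassing the translation also bypasses the budget_tokens-to-effort mapping:

```python
# Sketch of the workaround described above: remove "thinking" from the call
# kwargs and supply reasoning_effort as a dict directly, so the lossy
# translation never runs. This mirrors the issue's repro shapes; it is not
# a documented litellm interface.
from typing import Any, Dict


def apply_workaround(kwargs: Dict[str, Any]) -> Dict[str, Any]:
    thinking = kwargs.pop("thinking", None)
    if thinking and thinking.get("summary"):
        kwargs["reasoning_effort"] = {
            "effort": "medium",  # pick the effort level you actually want
            "summary": thinking["summary"],
        }
    return kwargs


out = apply_workaround({
    "model": "openai/gpt-5.1",
    "thinking": {"type": "enabled", "budget_tokens": 5000, "summary": "concise"},
})
print(out)
# {'model': 'openai/gpt-5.1', 'reasoning_effort': {'effort': 'medium', 'summary': 'concise'}}
```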

Relevant log output

Step 1 - reasoning_effort result: 'medium'
  -> summary field is LOST (returned plain string, not dict)
Step 2 - translate_thinking_for_model result: {'reasoning_effort': 'medium'}
  -> summary: "concise" is completely absent
Step 3 - handler hardcodes: {'effort': 'medium', 'summary': 'detailed'}
  -> Expected summary: "concise", got: "detailed"

Component

SDK (litellm Python package)

LiteLLM Version

v1.81.10
