Motivation.
When GPT-OSS was released, OpenAI provided guidance that chain of thought content should be returned in the `reasoning` field of the response. They recommend this even for the Chat Completions API, which does not officially support returning chain of thought. Since vLLM implements the OpenAI API, it makes sense to conform to their recommendations.
Before #27752, vLLM used `reasoning_content`, as this is what DeepSeek originally used. That PR makes the change from `reasoning_content` to `reasoning`, maintaining backwards compatibility.
Proposed Change.
Remove `reasoning_content` completely to avoid confusion.
Feedback Period.
2 weeks
CC List.
No response
Any Other Things.
No response
Before submitting a new issue...