feat(llm): add Reasoning field support for additional provider compatibility #1077
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the LLM transformer by introducing support for a new Reasoning field. This change ensures that the system can correctly process and convert reasoning information from LLM providers that use a Reasoning field instead of, or in addition to, ReasoningContent. The modifications span the core message structures and conversion logic, providing more flexible and compatible integration with diverse LLM services.
Code Review
This pull request adds support for a Reasoning field in the LLM transformer, which is a good enhancement for compatibility with various providers. The changes are mostly well-implemented, with comprehensive unit tests. However, I've identified a critical bug in the outbound conversion logic when handling foreign reasoning signatures, which could lead to incorrect data being sent to the upstream service. The corresponding unit test also asserts this buggy behavior. I've provided suggestions to correct both the implementation and the test.
Refines the reasoning field handling in MessageFromLLM to properly clear both reasoning and reasoningContent fields when a foreign signature is detected, preventing provider-specific data from being sent to OpenAI.
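A minimal sketch of the corrected outbound behavior, assuming the field names from the PR (`Reasoning`, `ReasoningContent`); the `Signature` field and the helper name are illustrative, not the repository's actual code:

```go
package main

import "fmt"

// Message mirrors the unified llm.Message shape described in the PR.
// Fields beyond Reasoning/ReasoningContent are assumptions for this sketch.
type Message struct {
	Content          string
	Reasoning        *string
	ReasoningContent *string
	Signature        *string // provider-specific reasoning signature (assumed field)
}

// clearForeignReasoning drops both reasoning fields when the message
// carries a signature from another provider, so provider-specific
// reasoning data is never forwarded to the upstream OpenAI service.
func clearForeignReasoning(m *Message, foreign bool) {
	if foreign {
		m.Reasoning = nil
		m.ReasoningContent = nil
		m.Signature = nil
	}
}

func main() {
	r := "chain of thought"
	m := Message{Content: "answer", Reasoning: &r, ReasoningContent: &r, Signature: &r}
	clearForeignReasoning(&m, true)
	fmt.Println(m.Reasoning == nil, m.ReasoningContent == nil)
}
```

The key point of the review is that clearing only one of the two fields would still leak provider-specific reasoning data through the other.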
Problem
AxonHub only supports the `reasoning_content` field, but some providers return a `reasoning` field instead, causing reasoning content to be lost when proxying through AxonHub.

Competing Standards in the OpenAI-compatible Ecosystem
There are two conventions for reasoning content:

- `message.reasoning_content` - used by DeepSeek, OpenAI o1/o3
- `message.reasoning` - used by some providers (e.g., models returning reasoning as a separate field)

Example
A direct API call to a provider using the `reasoning` field returns:

```json
{
  "content": "The equation 1+1=2 is true...",
  "reasoning_content": null,
  "reasoning": "The user is asking Why does 1+1=2?..."
}
```

Through AxonHub (before the fix):

```json
{
  "content": "The equation 1+1=2 is true...",
  "reasoning_content": null,
  "reasoning": null
}
```

The reasoning field was being lost because the OpenAI Message struct only defined `ReasoningContent`, missing `Reasoning` entirely.

Solution
Add native support for the `reasoning` field alongside `reasoning_content` in the OpenAI transformer layer, with bidirectional fallback logic: if `ReasoningContent` is nil but `Reasoning` has a value, copy `Reasoning` → `ReasoningContent`. This ensures compatibility with both conventions without breaking existing DeepSeek functionality.
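The fallback logic above can be sketched as follows; the helper name `normalizeReasoning` is illustrative, and the reverse direction is inferred from the PR's description of the fallback as "bidirectional":

```go
package main

import "fmt"

// Message holds both reasoning conventions, as added by this PR.
type Message struct {
	Reasoning        *string
	ReasoningContent *string
}

// normalizeReasoning applies the bidirectional fallback: if only one
// of the two fields is set, mirror it into the other so downstream
// code can rely on either convention.
func normalizeReasoning(m *Message) {
	switch {
	case m.ReasoningContent == nil && m.Reasoning != nil:
		m.ReasoningContent = m.Reasoning
	case m.Reasoning == nil && m.ReasoningContent != nil:
		m.Reasoning = m.ReasoningContent
	}
}

func main() {
	r := "The user is asking Why does 1+1=2?..."
	m := Message{Reasoning: &r}
	normalizeReasoning(&m)
	fmt.Println(*m.ReasoningContent)
}
```

Because both fields are pointers, an unset field is distinguishable from an empty string, so the fallback never overwrites a value that a provider actually returned.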
Changes
- Added a `Reasoning *string` field to the OpenAI Message struct (`model.go`)
- Added a `Reasoning *string` field to the unified `llm.Message` struct (`llm/model.go`)
- Updated inbound conversion (`inbound_convert.go`)
- Updated outbound conversion (`outbound_convert.go`)

Testing
- Tests covering the `reasoning` field (14/14 passing)
- Tests covering the `reasoning_content` field (14/14 passing)

Impact
Fixes reasoning field support for providers using the `reasoning` field convention.
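The two conventions covered by the test suite could be exercised with a small table-driven check like the following; the cases and helper are illustrative, not the PR's actual test code:

```go
package main

import "fmt"

type Message struct {
	Reasoning        *string
	ReasoningContent *string
}

// normalize copies Reasoning into ReasoningContent when only the
// former is set, matching the fallback described in the PR.
func normalize(m *Message) {
	if m.ReasoningContent == nil && m.Reasoning != nil {
		m.ReasoningContent = m.Reasoning
	}
}

func main() {
	s := func(v string) *string { return &v }
	cases := []struct {
		name string
		in   Message
		want string
	}{
		{"provider using reasoning", Message{Reasoning: s("think")}, "think"},
		{"provider using reasoning_content", Message{ReasoningContent: s("think")}, "think"},
	}
	pass := 0
	for _, c := range cases {
		normalize(&c.in)
		if c.in.ReasoningContent != nil && *c.in.ReasoningContent == c.want {
			pass++
		}
	}
	fmt.Printf("%d/%d\n", pass, len(cases))
}
```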