Walkthrough
The configuration files for GPT-5 model variants were updated to include a new `reasoning_effort` parameter.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LLMProvider as LLMProvider (litellm.py)
    participant Router
    participant Model
    Client->>LLMProvider: Request completion (specifying model)
    LLMProvider->>LLMProvider: Check if model starts with "gpt-5"
    alt Model is GPT-5
        LLMProvider->>Router: acompletion(..., allowed_openai_params=["reasoning_effort"])
    else Other models
        LLMProvider->>Router: acompletion(..., allowed_openai_params=None)
    end
    Router->>Model: Forward request with allowed params
    Model-->>Router: Return completion
    Router-->>LLMProvider: Return response
    LLMProvider-->>Client: Return completion
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Actionable comments posted: 5
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- deployment/kustomizations/base/cm.yaml (1 hunks)
- docker/config.example.yaml (1 hunks)
- wren-ai-service/src/providers/llm/litellm.py (3 hunks)
- wren-ai-service/tools/config/config.example.yaml (1 hunks)
- wren-ai-service/tools/config/config.full.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-06-20T02:37:21.292Z
Learnt from: cyyeh
PR: Canner/WrenAI#1763
File: wren-ai-service/src/pipelines/generation/semantics_description.py:31-33
Timestamp: 2025-06-20T02:37:21.292Z
Learning: In the wren-ai-service codebase, when adding new fields like "alias" to the output of functions that use Pydantic models for validation, the user prefers not to update the corresponding Pydantic model definitions to include these new fields.
Applied to files:
- wren-ai-service/tools/config/config.full.yaml
- wren-ai-service/tools/config/config.example.yaml
📚 Learning: 2025-01-27T08:55:44.919Z
Learnt from: cyyeh
PR: Canner/WrenAI#1236
File: wren-ai-service/src/providers/llm/litellm.py:49-49
Timestamp: 2025-01-27T08:55:44.919Z
Learning: In the LitellmLLMProvider class, model parameters (`self._model_kwargs`) are intentionally designed to take precedence over runtime parameters (`generation_kwargs`), which differs from other LLM providers. This is the intended behavior specific to this provider.
Applied to files:
wren-ai-service/src/providers/llm/litellm.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
🔇 Additional comments (1)
docker/config.example.yaml (1)
33-33: Docker config parity confirmed: `reasoning_effort` added for all GPT-5 models
- docker/config.example.yaml: lines 33, 40, 47
- service config (config.yaml): lines 84, 91, 98
- cm config (cm.yaml): lines 27, 33, 40, 47
All GPT-5 entries now include `reasoning_effort: minimal`. No further changes needed; proceed with the merge.
```yaml
max_completion_tokens: 4096
n: 1
seed: 0
reasoning_effort: minimal
```
💡 Verification agent
🧩 Analysis chain
ConfigMap model blocks: GPT-5 entries updated correctly.
Change is consistent with other environments.
Add a one-liner comment above the GPT-5 blocks to indicate reasoning_effort is GPT-5–specific.
Also applies to: 91-91, 98-98
🏁 Script executed:

```bash
#!/bin/bash
# Sanity check: reasoning_effort presence only on GPT-5 blocks inside this CM
rg -n 'gpt-5' -A10 deployment/kustomizations/base/cm.yaml | rg -n 'reasoning_effort'
rg -n 'gpt-4\.1' -A10 deployment/kustomizations/base/cm.yaml | rg -n 'reasoning_effort' || echo "No gpt-4.1 blocks with reasoning_effort found."
```

Length of output: 398
Add GPT-5-specific comment for reasoning_effort
Verified that reasoning_effort appears only in the gpt-5 blocks (lines 84, 91, 98) and not in any gpt-4.1 sections. Please prepend each gpt-5 block with a one-line comment noting that reasoning_effort is GPT-5–specific:
• File: deployment/kustomizations/base/cm.yaml
– Above line 84
– Above line 91
– Above line 98
Example diff:

```diff
--- a/deployment/kustomizations/base/cm.yaml
+++ b/deployment/kustomizations/base/cm.yaml
@@ -83,0 +83,1
+# reasoning_effort applies only to GPT-5 models
@@ -90,0 +90,1
+# reasoning_effort applies only to GPT-5 models
@@ -97,0 +97,1
+# reasoning_effort applies only to GPT-5 models
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
# reasoning_effort applies only to GPT-5 models
reasoning_effort: minimal
```
🤖 Prompt for AI Agents
In deployment/kustomizations/base/cm.yaml at lines 84, 91, and 98, add a
one-line comment above each line where reasoning_effort appears to indicate that
this field is specific to GPT-5. Insert a comment line immediately before each
of these lines stating that reasoning_effort is a GPT-5-specific setting.
```python
from litellm import acompletion
from litellm.router import Router
```
🛠️ Refactor suggestion
Make Router import backward compatible across litellm versions.
Older litellm exposed Router at litellm.Router. Use a dual-path import to avoid runtime import errors on environments not yet upgraded.
Apply:
```diff
-from litellm import acompletion
-from litellm.router import Router
+from litellm import acompletion
+try:
+    from litellm.router import Router
+except ImportError:
+    # Fallback for older litellm versions
+    from litellm import Router
```

📝 Committable suggestion
```python
from litellm import acompletion

try:
    from litellm.router import Router
except ImportError:
    # Fallback for older litellm versions
    from litellm import Router
```
🤖 Prompt for AI Agents
In wren-ai-service/src/providers/llm/litellm.py around lines 10 to 11, the
import of Router should be made backward compatible with older litellm versions
where Router is exposed as litellm.Router. Modify the import to first try
importing Router from litellm.router, and if that fails, import Router from
litellm directly, using a try-except block to handle the dual-path import and
avoid runtime import errors.
```python
allowed_params = (
    ["reasoning_effort"] if self._model.startswith("gpt-5") else None
)
```
💡 Verification agent
🧩 Analysis chain
Whitelist reasoning_effort carefully; confirm litellm allowlist semantics and router timeout.
- Good gating of `reasoning_effort` to GPT-5. To reduce risk, pass `allowed_openai_params` only when the key is actually present in `generation_kwargs` (and for GPT-5). Also, consider passing `timeout` to the router call for parity with direct calls (if supported).
```diff
-        allowed_params = (
-            ["reasoning_effort"] if self._model.startswith("gpt-5") else None
-        )
+        has_reasoning = "reasoning_effort" in generation_kwargs
+        allowed_params = (
+            ["reasoning_effort"]
+            if self._model.startswith("gpt-5") and has_reasoning
+            else None
+        )
@@
-        completion = await self._router.acompletion(
+        completion = await self._router.acompletion(
             model=self._model,
             messages=openai_formatted_messages,
             stream=streaming_callback is not None,
             allowed_openai_params=allowed_params,
+            # align with direct call if supported by litellm Router
+            # timeout=self._timeout,
             mock_testing_fallbacks=self._enable_fallback_testing,
             **generation_kwargs,
         )
```

Please confirm two points to ensure we won't drop other kwargs inadvertently:
And check whether the router supports a timeout kwarg:
Additionally, confirm the repository’s litellm version to decide on import compatibility and supported kwargs:
Also applies to: 112-112, 125-125
🏁 Script executed:

```bash
#!/bin/bash
# Find litellm version constraints
rg -n 'litellm' -g '!node_modules' -S
fd -a 'requirements*.txt|pyproject.toml|poetry.lock|Pipfile*' | xargs -I{} rg -n --no-heading 'litellm' {}
```

Length of output: 65656
🌐 Web query:
In litellm, what is the exact behavior of the `allowed_openai_params` argument for acompletion/router? Does it strictly allowlist only those keys or augment the default set of supported OpenAI params?
💡 Result:
In LiteLLM, the allowed_openai_params argument for a completion or router function specifies which OpenAI parameters are permitted in the request. By default, LiteLLM restricts the use of certain OpenAI parameters to ensure compatibility with the selected model and provider. If you encounter a litellm.UnsupportedParamsError due to the inclusion of a parameter not supported by the current model or provider, you can use allowed_openai_params to explicitly allow that parameter. This approach does not augment the default set of supported OpenAI parameters; instead, it serves as an allowlist, permitting specific parameters that would otherwise be restricted.
For example, if you need to use the tools parameter with a model that doesn't support it by default, you can include it in the allowed_openai_params list:
```python
await litellm.acompletion(
    model="azure/o_series/<my-deployment-name>",
    api_key="xxxxx",
    api_base=api_base,
    messages=[{"role": "user", "content": "Hello! return a json object"}],
    tools=[{"type": "function", "function": {"name": "get_current_time", "description": "Get the current time in a given location.", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city name, e.g. San Francisco"}}, "required": ["location"]}}}],
    allowed_openai_params=["tools"],
)
```

In this example, specifying `allowed_openai_params=["tools"]` tells LiteLLM to permit the `tools` parameter in the request, even if it's not part of the default supported parameters for the chosen model and provider. (docs.litellm.ai)
Therefore, the allowed_openai_params argument functions as an explicit allowlist, enabling the use of specific OpenAI parameters that are not included in the default set of supported parameters for the selected model and provider.
🌐 Web query:
Does litellm Router.acompletion accept a `timeout` parameter similar to `litellm.acompletion`?
💡 Result:
Yes, the Router.acompletion method in LiteLLM accepts a timeout parameter, similar to litellm.acompletion. This parameter allows you to specify the maximum duration (in seconds) that the completion request should wait for a response before timing out.
You can set the timeout parameter in two ways:
1. Global timeout for all calls: when initializing the Router, you can set a global timeout that applies to all completion requests made through that router instance.

```python
from litellm import Router

model_list = [{...}]
router = Router(model_list=model_list, timeout=30)  # Timeout set to 30 seconds
```

In this example, any completion request made through `router` will have a timeout of 30 seconds. (docs.litellm.ai)

2. Timeout for individual models: you can specify a timeout for each model by including the `timeout` parameter within the `litellm_params` for that model.

```python
import asyncio
import os

from litellm import Router

model_list = [{
    "model_name": "gpt-3.5-turbo",
    "litellm_params": {
        "model": "azure/chatgpt-v-2",
        "api_key": os.getenv("AZURE_API_KEY"),
        "api_version": os.getenv("AZURE_API_VERSION"),
        "api_base": os.getenv("AZURE_API_BASE"),
        "timeout": 300,  # Timeout set to 300 seconds (5 minutes)
    }
}]
router = Router(model_list=model_list, routing_strategy="least-busy")

async def router_acompletion():
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
    )
    print(response)
    return response

asyncio.run(router_acompletion())
```

In this setup, the `gpt-3.5-turbo` model has a timeout of 300 seconds (5 minutes). (docs.litellm.ai)
By configuring the timeout parameter, you can control how long the system waits for a response before considering the request as failed due to a timeout.
Citations:
- 1: https://docs.litellm.ai/docs/proxy/timeout?utm_source=openai
- 2: https://docs.litellm.ai/docs/proxy/timeout?utm_source=openai
Enforce reasoning_effort gating and enable router timeout
LiteLLM's `allowed_openai_params` is a strict allowlist (it doesn't augment the default set), so we must only pass `["reasoning_effort"]` when that key is actually in `generation_kwargs`. Also, since `Router.acompletion` supports a `timeout` kwarg, forward `self._timeout` to keep parity with direct calls.
Please update wren-ai-service/src/providers/llm/litellm.py around lines 103–106 and the router call:
• Add a presence check for "reasoning_effort" before constructing allowed_openai_params.
• Pass timeout=self._timeout to self._router.acompletion.
Example diff:
```diff
@@ lines 101-108
-        allowed_params = (
-            ["reasoning_effort"] if self._model.startswith("gpt-5") else None
-        )
+        has_reasoning = "reasoning_effort" in generation_kwargs
+        allowed_params = (
+            ["reasoning_effort"]
+            if self._model.startswith("gpt-5") and has_reasoning
+            else None
+        )
@@ lines 111-116
-        completion = await self._router.acompletion(
+        completion = await self._router.acompletion(
             model=self._model,
             messages=openai_formatted_messages,
             stream=streaming_callback is not None,
-            allowed_openai_params=allowed_params,
+            allowed_openai_params=allowed_params,
+            timeout=self._timeout,
             mock_testing_fallbacks=self._enable_fallback_testing,
             **generation_kwargs,
         )
```

📝 Committable suggestion
```python
# Only allow reasoning_effort if the model is GPT-5 and the caller actually provided it
has_reasoning = "reasoning_effort" in generation_kwargs
allowed_params = (
    ["reasoning_effort"]
    if self._model.startswith("gpt-5") and has_reasoning
    else None
)

completion = await self._router.acompletion(
    model=self._model,
    messages=openai_formatted_messages,
    stream=streaming_callback is not None,
    allowed_openai_params=allowed_params,
    timeout=self._timeout,
    mock_testing_fallbacks=self._enable_fallback_testing,
    **generation_kwargs,
)
🤖 Prompt for AI Agents
In wren-ai-service/src/providers/llm/litellm.py around lines 103 to 106, update
the allowed_params assignment to include "reasoning_effort" only if it is
present in generation_kwargs, enforcing the gating. Additionally, when calling
self._router.acompletion, pass timeout=self._timeout to maintain parity with
direct calls. This ensures allowed_openai_params is strictly filtered and the
router respects the timeout setting.
```yaml
max_completion_tokens: 4096
n: 1
seed: 0
reasoning_effort: minimal
```
💡 Verification agent
🧩 Analysis chain
Reasoning effort for GPT-5: looks good; verify accepted values and scope.
- LGTM adding `reasoning_effort: minimal` only to GPT-5 variants; 4.1 models remain unchanged.
- Please confirm the accepted enum for this field (the OpenAI API documents minimal, low, medium, and high) and that litellm forwards it only for GPT-5 as intended.

Consider adding an inline comment documenting allowed values and that this is GPT-5-specific.
Run to double-check repository-wide consistency:
Also applies to: 40-40, 47-47
🏁 Script executed:

```bash
#!/bin/bash
# 1) Ensure we don't set reasoning_effort for non-GPT-5 models
rg -n 'reasoning_effort' -A2 -B4
# 2) Verify all GPT-5 model blocks carry the field
rg -n 'model: gpt-5' -A10 | rg 'reasoning_effort'
```

Length of output: 7067
Document accepted reasoning_effort enums and GPT-5 scope
Verified that litellm.py only forwards `reasoning_effort` when `self._model.startswith("gpt-5")` (line 104), so the gating logic is correct. To complete this change:
- Confirm the valid enum values (the OpenAI API documents minimal, low, medium, and high).
- Add an inline comment above each `reasoning_effort: minimal` entry in the following files, noting that it applies only to GPT-5 models and listing the allowed values:
• wren-ai-service/tools/config/config.full.yaml (lines 29–33, 36–40, 43–47)
• wren-ai-service/tools/config/config.example.yaml (lines 29–33, 36–40, 43–47)
• docker/config.example.yaml (lines 29–33, 36–40, 43–47)
• deployment/kustomizations/base/cm.yaml (lines 80–84, 85–91, 92–98)
🤖 Prompt for AI Agents
In wren-ai-service/tools/config/config.example.yaml around line 33, add an
inline comment above the `reasoning_effort: minimal` entry stating that this
setting applies only to GPT-5 models and that the allowed enum values are
minimal, low, medium, and high. Repeat this for the specified line ranges in the
other listed files, ensuring the comment clearly documents the gating logic and
valid enums for reasoning_effort.
```yaml
max_completion_tokens: 4096
n: 1
seed: 0
reasoning_effort: minimal
```
💡 Verification agent
🧩 Analysis chain
Full config updated correctly for GPT-5; verify no leakage to other models.
Looks correct and consistent with other config files.
Add a short comment near the first GPT-5 model block noting this param is GPT-5–specific.
Also applies to: 40-40, 47-47
🏁 Script executed:

```bash
#!/bin/bash
# Assert no gpt-4.* blocks contain reasoning_effort
rg -n 'model:\s*gpt-4' -A8 | rg -n 'reasoning_effort' || echo "No gpt-4 blocks with reasoning_effort found."
```

Length of output: 152
Add GPT-5–specific comment for reasoning_effort
No leakage into GPT-4 blocks was detected. Please annotate the GPT-5 sections to clarify that `reasoning_effort: minimal` is GPT-5-only:
• wren-ai-service/tools/config/config.full.yaml
– Line 33 (first GPT-5 model block): add a comment, e.g.:

```yaml
# This parameter applies only to GPT-5
reasoning_effort: minimal
```

– Repeat the same comment near the GPT-5 blocks at lines 40 and 47.
🤖 Prompt for AI Agents
In wren-ai-service/tools/config/config.full.yaml at lines 33, 40, and 47, add a
comment above the reasoning_effort: minimal setting to indicate that this
parameter applies only to GPT-5. Insert a line like "# This parameter applies
only to GPT-5" immediately before each reasoning_effort: minimal entry in these
GPT-5 model blocks to clarify its scope.