
feat(config): add 'reasoning_effort' parameter to GPT-5 model configurations across multiple files (ai-env-changed)#1880

Merged
cyyeh merged 1 commit into main from chore/wren-ai-service/add-reasoing-effort-levels
Aug 8, 2025

Conversation

Collaborator

@yichieh-lu yichieh-lu commented Aug 8, 2025

Summary by CodeRabbit

  • New Features

    • Introduced a new configuration option, allowing the selection of minimal reasoning effort for GPT-5 models.
  • Chores

    • Updated configuration files to support the new reasoning effort setting for relevant models.

Contributor

coderabbitai bot commented Aug 8, 2025

Walkthrough

The configuration files for GPT-5 model variants were updated to include a new reasoning_effort: minimal parameter under their kwargs sections. Additionally, the service logic was modified to selectively pass this parameter to model completion calls for GPT-5 models by updating the allowed OpenAI parameters in the provider code.

Changes

Cohort / File(s) | Change Summary

Model Configurations Update
deployment/kustomizations/base/cm.yaml, docker/config.example.yaml, wren-ai-service/tools/config/config.example.yaml, wren-ai-service/tools/config/config.full.yaml
Added reasoning_effort: minimal under kwargs for all GPT-5 model variants. No other configuration changes.

Provider Logic Update
wren-ai-service/src/providers/llm/litellm.py
Modified to detect GPT-5 models and pass reasoning_effort in the allowed OpenAI parameters when invoking completion calls. Updated the import path for Router.
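The provider-side gating described above can be sketched in a few lines of Python. The helper name and its parameters are illustrative, not the provider's actual API; only the prefix check and the allowlist value come from the PR:

```python
from typing import List, Optional


def allowed_openai_params(model: str) -> Optional[List[str]]:
    """Illustrative sketch of the GPT-5 gating: litellm treats
    allowed_openai_params as an allowlist of extra OpenAI parameters,
    so it is set only for GPT-5 model variants and left as None otherwise.
    """
    if model.startswith("gpt-5"):
        return ["reasoning_effort"]
    return None


# The provider would then forward the result to the router call, roughly:
#   await router.acompletion(..., allowed_openai_params=allowed_openai_params(model), ...)
```

With this shape, non-GPT-5 models receive `allowed_openai_params=None`, so their completion calls behave exactly as before.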

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant LLMProvider as LLMProvider (litellm.py)
    participant Router
    participant Model

    Client->>LLMProvider: Request completion (specifying model)
    LLMProvider->>LLMProvider: Check if model starts with "gpt-5"
    alt Model is GPT-5
        LLMProvider->>Router: acompletion(..., allowed_openai_params=["reasoning_effort"])
    else Other models
        LLMProvider->>Router: acompletion(..., allowed_openai_params=None)
    end
    Router->>Model: Forward request with allowed params
    Model-->>Router: Return completion
    Router-->>LLMProvider: Return response
    LLMProvider-->>Client: Return completion

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

Suggested labels

launcher, wren-ai-service, deployment, ai-env-changed

Suggested reviewers

  • cyyeh

Poem

A rabbit hopped through YAML fields,
Tweaking models for clever yields.
"Reasoning effort: minimal," it wrote,
In configs and code, a careful note.
Now GPT-5 thinks with lighter load—
Hooray for changes down the road!
🐇✨


@github-actions github-actions bot changed the title from "feat(config): add 'reasoning_effort' parameter to GPT-5 model configurations across multiple files" to "feat(config): add 'reasoning_effort' parameter to GPT-5 model configurations across multiple files (ai-env-changed)" on Aug 8, 2025
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f3fd39f and cd90c30.

📒 Files selected for processing (5)
  • deployment/kustomizations/base/cm.yaml (1 hunks)
  • docker/config.example.yaml (1 hunks)
  • wren-ai-service/src/providers/llm/litellm.py (3 hunks)
  • wren-ai-service/tools/config/config.example.yaml (1 hunks)
  • wren-ai-service/tools/config/config.full.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-06-20T02:37:21.292Z
Learnt from: cyyeh
PR: Canner/WrenAI#1763
File: wren-ai-service/src/pipelines/generation/semantics_description.py:31-33
Timestamp: 2025-06-20T02:37:21.292Z
Learning: In the wren-ai-service codebase, when adding new fields like "alias" to the output of functions that use Pydantic models for validation, the user prefers not to update the corresponding Pydantic model definitions to include these new fields.

Applied to files:

  • wren-ai-service/tools/config/config.full.yaml
  • wren-ai-service/tools/config/config.example.yaml
📚 Learning: 2025-01-27T08:55:44.919Z
Learnt from: cyyeh
PR: Canner/WrenAI#1236
File: wren-ai-service/src/providers/llm/litellm.py:49-49
Timestamp: 2025-01-27T08:55:44.919Z
Learning: In the LitellmLLMProvider class, model parameters (`self._model_kwargs`) are intentionally designed to take precedence over runtime parameters (`generation_kwargs`), which differs from other LLM providers. This is the intended behavior specific to this provider.

Applied to files:

  • wren-ai-service/src/providers/llm/litellm.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (1)
docker/config.example.yaml (1)

33-33: Docker config parity confirmed: reasoning_effort added for all GPT-5 models

  • docker/config.example.yaml: lines 33, 40, 47
  • service config (config.yaml): lines 84, 91, 98
  • cm config (cm.yaml): lines 27, 33, 40, 47

All GPT-5 entries now include reasoning_effort: minimal. No further changes needed—proceed with the merge.

    max_completion_tokens: 4096
    n: 1
    seed: 0
    reasoning_effort: minimal
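For context, the quoted kwargs sit inside a model entry. A GPT-5 block in these configs would look roughly like this; the surrounding keys are taken from the snippet above, and only reasoning_effort is new in this PR:

```yaml
# Hypothetical shape of a GPT-5 model entry; field names other than
# reasoning_effort are taken from the kwargs quoted in the review comment.
- model: gpt-5
  kwargs:
    max_completion_tokens: 4096
    n: 1
    seed: 0
    reasoning_effort: minimal  # new in this PR; GPT-5 models only
```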
Contributor

💡 Verification agent

🧩 Analysis chain

ConfigMap model blocks: GPT-5 entries updated correctly.

Change is consistent with other environments.

Add a one-liner comment above the GPT-5 blocks to indicate reasoning_effort is GPT-5–specific.

Also applies to: 91-91, 98-98


🏁 Script executed:

#!/bin/bash
# Sanity check: reasoning_effort present only in GPT-5 blocks inside this CM
rg -n -A10 'gpt-5' deployment/kustomizations/base/cm.yaml | rg 'reasoning_effort'
rg -n -A10 'gpt-4\.1' deployment/kustomizations/base/cm.yaml | rg 'reasoning_effort' || echo "No gpt-4.1 blocks with reasoning_effort found."

Length of output: 398


Add GPT-5-specific comment for reasoning_effort

Verified that reasoning_effort appears only in the gpt-5 blocks (lines 84, 91, 98) and not in any gpt-4.1 sections. Please prepend each gpt-5 block with a one-line comment noting that reasoning_effort is GPT-5–specific:

• File: deployment/kustomizations/base/cm.yaml
– Above line 84
– Above line 91
– Above line 98

Example diff:

--- a/deployment/kustomizations/base/cm.yaml
+++ b/deployment/kustomizations/base/cm.yaml
@@ -83,0 +83,1
+# reasoning_effort applies only to GPT-5 models
@@ -90,0 +90,1
+# reasoning_effort applies only to GPT-5 models
@@ -97,0 +97,1
+# reasoning_effort applies only to GPT-5 models
🤖 Prompt for AI Agents
In deployment/kustomizations/base/cm.yaml at lines 84, 91, and 98, add a
one-line comment above each line where reasoning_effort appears to indicate that
this field is specific to GPT-5. Insert a comment line immediately before each
of these lines stating that reasoning_effort is a GPT-5-specific setting.

Comment on lines +10 to +11
from litellm import acompletion
from litellm.router import Router
Contributor

🛠️ Refactor suggestion

Make Router import backward compatible across litellm versions.

Older litellm exposed Router at litellm.Router. Use a dual-path import to avoid runtime import errors on environments not yet upgraded.

Apply:

-from litellm import acompletion
-from litellm.router import Router
+from litellm import acompletion
+try:
+    from litellm.router import Router
+except ImportError:
+    # Fallback for older litellm versions
+    from litellm import Router
🤖 Prompt for AI Agents
In wren-ai-service/src/providers/llm/litellm.py around lines 10 to 11, the
import of Router should be made backward compatible with older litellm versions
where Router is exposed as litellm.Router. Modify the import to first try
importing Router from litellm.router, and if that fails, import Router from
litellm directly, using a try-except block to handle the dual-path import and
avoid runtime import errors.

Comment on lines +103 to +106
allowed_params = (
    ["reasoning_effort"] if self._model.startswith("gpt-5") else None
)

Contributor

💡 Verification agent

🧩 Analysis chain

Whitelist reasoning_effort carefully; confirm litellm allowlist semantics and router timeout.

  • Good gating of reasoning_effort to GPT-5. To reduce risk, pass allowed_openai_params only when the key is actually present in generation_kwargs (and for GPT-5). Also, consider passing timeout to the router call for parity with direct calls (if supported).
-            allowed_params = (
-                ["reasoning_effort"] if self._model.startswith("gpt-5") else None
-            )
+            has_reasoning = "reasoning_effort" in generation_kwargs
+            allowed_params = (
+                ["reasoning_effort"]
+                if self._model.startswith("gpt-5") and has_reasoning
+                else None
+            )
@@
-                completion = await self._router.acompletion(
+                completion = await self._router.acompletion(
                     model=self._model,
                     messages=openai_formatted_messages,
                     stream=streaming_callback is not None,
                     allowed_openai_params=allowed_params,
+                    # align with direct call if supported by litellm Router
+                    # timeout=self._timeout,
                     mock_testing_fallbacks=self._enable_fallback_testing,
                     **generation_kwargs,
                 )

Please confirm two points to ensure we won’t drop other kwargs inadvertently:

And check whether the router supports a timeout kwarg:

Additionally, confirm the repository’s litellm version to decide on import compatibility and supported kwargs:

Also applies to: 112-112, 125-125


🏁 Script executed:

#!/bin/bash
# Find litellm version constraints
rg -n 'litellm' -g '!node_modules' -S
fd -a 'requirements*.txt|pyproject.toml|poetry.lock|Pipfile*' | xargs -I{} rg -n --no-heading 'litellm' {}

Length of output: 65656


🌐 Web query:

In litellm, what is the exact behavior of the `allowed_openai_params` argument for acompletion/router? Does it strictly allowlist only those keys or augment the default set of supported OpenAI params?

💡 Result:

In LiteLLM, the allowed_openai_params argument for a completion or router function specifies which OpenAI parameters are permitted in the request. By default, LiteLLM restricts the use of certain OpenAI parameters to ensure compatibility with the selected model and provider. If you encounter a litellm.UnsupportedParamsError due to the inclusion of a parameter not supported by the current model or provider, you can use allowed_openai_params to explicitly allow that parameter. This approach does not augment the default set of supported OpenAI parameters; instead, it serves as an allowlist, permitting specific parameters that would otherwise be restricted.

For example, if you need to use the tools parameter with a model that doesn't support it by default, you can include it in the allowed_openai_params list:

await litellm.acompletion(
    model="azure/o_series/<my-deployment-name>",
    api_key="xxxxx",
    api_base=api_base,
    messages=[{"role": "user", "content": "Hello! return a json object"}],
    tools=[{"type": "function", "function": {"name": "get_current_time", "description": "Get the current time in a given location.", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city name, e.g. San Francisco"}}, "required": ["location"]}}}],
    allowed_openai_params=["tools"],
)

In this example, specifying allowed_openai_params=["tools"] tells LiteLLM to permit the tools parameter in the request, even if it's not part of the default supported parameters for the chosen model and provider. (docs.litellm.ai)

Therefore, the allowed_openai_params argument functions as an explicit allowlist, enabling the use of specific OpenAI parameters that are not included in the default set of supported parameters for the selected model and provider.



🌐 Web query:

Does litellm Router.acompletion accept a `timeout` parameter similar to `litellm.acompletion`?

💡 Result:

Yes, the Router.acompletion method in LiteLLM accepts a timeout parameter, similar to litellm.acompletion. This parameter allows you to specify the maximum duration (in seconds) that the completion request should wait for a response before timing out.

You can set the timeout parameter in two ways:

  1. Global Timeout for All Calls: When initializing the Router, you can set a global timeout that applies to all completion requests made through that router instance.

    from litellm import Router
    
    model_list = [{...}]
    router = Router(model_list=model_list, timeout=30)  # Timeout set to 30 seconds

    In this example, any completion request made through router will have a timeout of 30 seconds. (docs.litellm.ai)

  2. Timeout for Individual Models: You can specify a timeout for each model by including the timeout parameter within the litellm_params for that model.

    from litellm import Router
    import asyncio
    
    model_list = [{
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "azure/chatgpt-v-2",
            "api_key": os.getenv("AZURE_API_KEY"),
            "api_version": os.getenv("AZURE_API_VERSION"),
            "api_base": os.getenv("AZURE_API_BASE"),
            "timeout": 300  # Timeout set to 300 seconds (5 minutes)
        }
    }]
    
    router = Router(model_list=model_list, routing_strategy="least-busy")
    
    async def router_acompletion():
        response = await router.acompletion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hey, how's it going?"}]
        )
        print(response)
        return response
    
    asyncio.run(router_acompletion())

    In this setup, the gpt-3.5-turbo model has a timeout of 300 seconds (5 minutes). (docs.litellm.ai)

By configuring the timeout parameter, you can control how long the system waits for a response before considering the request as failed due to a timeout.



Enforce reasoning_effort gating and enable router timeout

LiteLLM’s allowed_openai_params is a strict allowlist (it doesn’t augment the default set), so we must only pass ["reasoning_effort"] when that key is actually in generation_kwargs. Also, since Router.acompletion supports a timeout kwarg, forward self._timeout to keep parity with direct calls.

Please update wren-ai-service/src/providers/llm/litellm.py around lines 103–106 and the router call:

• Add a presence check for "reasoning_effort" before constructing allowed_openai_params.
• Pass timeout=self._timeout to self._router.acompletion.

Example diff:

@@ lines 101–108
-            allowed_params = (
-                ["reasoning_effort"] if self._model.startswith("gpt-5") else None
-            )
+            has_reasoning = "reasoning_effort" in generation_kwargs
+            allowed_params = (
+                ["reasoning_effort"]
+                if self._model.startswith("gpt-5") and has_reasoning
+                else None
+            )

@@ lines 111–116
-                completion = await self._router.acompletion(
+                completion = await self._router.acompletion(
                     model=self._model,
                     messages=openai_formatted_messages,
                     stream=streaming_callback is not None,
-                    allowed_openai_params=allowed_params,
+                    allowed_openai_params=allowed_params,
+                    timeout=self._timeout,
                     mock_testing_fallbacks=self._enable_fallback_testing,
                     **generation_kwargs,
                 )
🤖 Prompt for AI Agents
In wren-ai-service/src/providers/llm/litellm.py around lines 103 to 106, update
the allowed_params assignment to include "reasoning_effort" only if it is
present in generation_kwargs, enforcing the gating. Additionally, when calling
self._router.acompletion, pass timeout=self._timeout to maintain parity with
direct calls. This ensures allowed_openai_params is strictly filtered and the
router respects the timeout setting.

    max_completion_tokens: 4096
    n: 1
    seed: 0
    reasoning_effort: minimal
Contributor

💡 Verification agent

🧩 Analysis chain

Reasoning effort for GPT-5: looks good; verify accepted values and scope.

  • LGTM adding reasoning_effort: minimal only to GPT-5 variants; 4.1 models remain unchanged.
  • Please confirm the accepted enum for this field (for GPT-5 the expected values are minimal/low/medium/high) and that litellm forwards it only for GPT-5 as intended.

Consider adding an inline comment documenting allowed values and that this is GPT-5–specific.

Run to double-check repository-wide consistency:

Also applies to: 40-40, 47-47


🏁 Script executed:

#!/bin/bash
# 1) Ensure we don't set reasoning_effort for non-GPT-5 models
rg -n 'reasoning_effort' -A2 -B4

# 2) Verify all GPT-5 model blocks carry the field
rg -n -A10 'model: gpt-5' | rg 'reasoning_effort'

Length of output: 7067


Document accepted reasoning_effort enums and GPT-5 scope

Verified that litellm.py only forwards reasoning_effort when self._model.startswith("gpt-5") (line 104), so the gating logic is correct. To complete this change:

  • Confirm that the only valid enum values are minimal, low, medium, and high.
  • Add an inline comment above each reasoning_effort: minimal entry in the following files, noting that it applies only to GPT-5 models and listing the allowed values:
    • wren-ai-service/tools/config/config.full.yaml (lines 29–33, 36–40, 43–47)
    • wren-ai-service/tools/config/config.example.yaml (lines 29–33, 36–40, 43–47)
    • docker/config.example.yaml (lines 29–33, 36–40, 43–47)
    • deployment/kustomizations/base/cm.yaml (lines 80–84, 85–91, 92–98)
🤖 Prompt for AI Agents
In wren-ai-service/tools/config/config.example.yaml around line 33, add an
inline comment above the reasoning_effort: minimal entry stating that this
setting applies only to GPT-5 models and that the allowed enum values are
minimal, low, medium, and high. Repeat this for the specified line ranges in the
other listed files, ensuring the comment clearly documents the gating logic and
valid enums for reasoning_effort.

    max_completion_tokens: 4096
    n: 1
    seed: 0
    reasoning_effort: minimal
Contributor

💡 Verification agent

🧩 Analysis chain

Full config updated correctly for GPT-5; verify no leakage to other models.

Looks correct and consistent with other config files.

Add a short comment near the first GPT-5 model block noting this param is GPT-5–specific.

Also applies to: 40-40, 47-47


🏁 Script executed:

#!/bin/bash
# Assert no gpt-4.* blocks contain reasoning_effort
rg -n 'model:\s*gpt-4' -A8 | rg -n 'reasoning_effort' || echo "No gpt-4 blocks with reasoning_effort found."

Length of output: 152


Add GPT-5–specific comment for reasoning_effort

No leakage into GPT-4 blocks was detected. Please annotate the GPT-5 sections to clarify that reasoning_effort: minimal is GPT-5–only:

• wren-ai-service/tools/config/config.full.yaml
– Line 33 (first GPT-5 model block): add a comment, e.g.:

    # This parameter applies only to GPT-5
    reasoning_effort: minimal

– Repeat the same comment near the GPT-5 blocks at lines 40 and 47.

🤖 Prompt for AI Agents
In wren-ai-service/tools/config/config.full.yaml at lines 33, 40, and 47, add a
comment above the reasoning_effort: minimal setting to indicate that this
parameter applies only to GPT-5. Insert a line like "# This parameter applies
only to GPT-5" immediately before each reasoning_effort: minimal entry in these
GPT-5 model blocks to clarify its scope.

@cyyeh cyyeh self-requested a review August 8, 2025 11:13
Member

@cyyeh cyyeh left a comment

lgtm

@cyyeh cyyeh merged commit 43491b2 into main Aug 8, 2025
7 of 9 checks passed
@cyyeh cyyeh deleted the chore/wren-ai-service/add-reasoing-effort-levels branch August 8, 2025 11:13

2 participants