
fix(providers): only flag reasoning=true for known OpenAI reasoning models (#183)#187

Merged: hqhq1025 merged 1 commit into main from fix/issue-183-reasoning-flag on Apr 23, 2026

Conversation

@hqhq1025 (Collaborator)

Fixes #183.

Problem

Custom providers on OpenAI-compatible gateways (Qwen/DashScope, DeepSeek, GLM/BigModel, Moonshot, any openai-chat wire pointing to a non-OpenAI baseUrl) returned:

400 developer is not one of ['system', 'assistant', 'user', 'tool', 'function']

Root cause

packages/providers/src/index.ts::synthesizeWireModel hard-coded reasoning: true on every synthetic PiModel. pi-ai's openai-chat / openai-responses adapters treat model.reasoning === true as "this endpoint supports the Responses API developer role" and rewrite the system prompt role accordingly. developer is OpenAI-Responses-only (GPT-5 / o-family); no third-party OpenAI-compat gateway accepts it.
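The role rewrite at the heart of the bug can be sketched as follows (the function name and model shape here are hypothetical; pi-ai's actual adapter logic is more involved):

```typescript
// Hypothetical sketch of the adapter's role selection; not pi-ai source.
// When a model is flagged reasoning-capable, the system prompt is sent
// with the Responses-only `developer` role instead of `system`.
interface PiModelLike {
  reasoning: boolean;
}

function systemPromptRole(model: PiModelLike): "developer" | "system" {
  return model.reasoning ? "developer" : "system";
}
```

With the old hard-coded `reasoning: true`, every synthetic model took the `developer` branch, and non-OpenAI gateways replied with the 400 above.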

Fix

New `inferReasoning(wire, modelId, baseUrl)` helper:

  • anthropic / openai-responses / openai-codex-responses → true
  • openai-chat → true only when baseUrl is api.openai.com and modelId matches a known reasoning family (o1/o3/o4/gpt-5)
  • otherwise → false
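A self-contained sketch of the new helper, reconstructed from the rules described in this PR (the `Wire` type and regexes are illustrative, not the merged source):

```typescript
// Reconstruction of inferReasoning per this PR's description; illustrative only.
type Wire =
  | "anthropic"
  | "openai-responses"
  | "openai-codex-responses"
  | "openai-chat";

function isOpenAIOfficial(baseUrl: string | undefined): boolean {
  // Matches the merged behavior: no baseUrl is treated as non-official.
  if (!baseUrl) return false;
  return /^https:\/\/api\.openai\.com(\/|$)/.test(baseUrl);
}

function inferReasoning(wire: Wire, modelId: string, baseUrl?: string): boolean {
  // Responses-style wires always support the `developer` role.
  if (
    wire === "anthropic" ||
    wire === "openai-responses" ||
    wire === "openai-codex-responses"
  ) {
    return true;
  }
  // openai-chat: only the official OpenAI endpoint plus a known
  // reasoning family (o1/o3/o4/gpt-5) gets the flag.
  return isOpenAIOfficial(baseUrl) && /^(o1|o3|o4|gpt-5)/.test(modelId);
}
```

Any other combination, including every third-party OpenAI-compat gateway, falls through to `false`.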

Coverage

Unblocks every OpenAI-compatible Chinese gateway and any generic OpenAI-compat endpoint:

  • Qwen / DashScope (dashscope.aliyuncs.com)
  • DeepSeek (api.deepseek.com)
  • GLM / Zhipu BigModel (open.bigmodel.cn)
  • Moonshot / Kimi
  • Any user-configured LiteLLM / Azure / self-hosted openai-chat gateway

Tests

packages/providers/src/index.test.ts — 9 new inferReasoning cases + 1 integration case asserting Qwen DashScope gets reasoning: false through complete().

Four-principle check (PRINCIPLES §5b)

Out of scope

Didn't touch retry / errors / Settings / agent.ts.

fix(providers): only flag reasoning=true for known OpenAI reasoning models (#183)

`synthesizeWireModel` used to hard-code `reasoning: true` on every
synthetic PiModel. pi-ai's openai-chat / openai-responses adapters read
that flag and emit the system prompt with role `developer` instead of
`system`. `developer` is OpenAI-Responses-only; every OpenAI-compatible
gateway (Qwen/DashScope, DeepSeek, GLM/BigModel, Moonshot, …) rejects it
with HTTP 400.

Add `inferReasoning(wire, modelId, baseUrl)`:
- anthropic / openai-responses / openai-codex-responses -> true
- openai-chat -> true only when baseUrl is api.openai.com AND modelId
  matches a known reasoning family (o1/o3/o4/gpt-5)
- otherwise false

Preserves the #134 fix (openai-responses reasoning) while unblocking
custom-provider users on Qwen, DeepSeek, GLM, Moonshot, etc.

Signed-off-by: hqhq1025 <[email protected]>
@github-actions bot added labels docs (Documentation) and area:providers (packages/providers: pi-ai adapter, model calls) on Apr 23, 2026
@github-actions bot (Contributor) left a comment:

Findings

  • [Major] inferReasoning disables OpenAI-chat reasoning when baseUrl is omitted — this regresses default OpenAI routing for synthesized models because isOpenAIOfficial(undefined) returns false, so known reasoning families (e.g. gpt-5*, o3*) are never flagged even when the endpoint is OpenAI default. Evidence packages/providers/src/index.ts:182, packages/providers/src/index.ts:203
    Suggested fix:
    function isOpenAIOfficial(baseUrl: string | undefined): boolean {
      // undefined means default OpenAI endpoint in pi-ai openai adapters
      if (baseUrl === undefined) return true;
      return /^https:\/\/api\.openai\.com(\/|$)/.test(baseUrl);
    }

Summary

  • Review mode: initial
  • 1 issue found (reasoning detection regression on openai-chat when baseUrl is not set).
  • Not found in repo/docs: docs/VISION.md, docs/PRINCIPLES.md in this checkout, so those constraints could not be re-validated directly for this run.

Testing

  • Not run (automation)
  • Suggested tests: add a unit test asserting inferReasoning('openai-chat', 'gpt-5-turbo', undefined) === true.
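The suggested case can be sketched self-contained, with the bot's corrected `isOpenAIOfficial` folded in (both function bodies are reconstructions from this PR's discussion, not the merged source):

```typescript
// Sketch with the suggested fix applied; reconstruction, not repo source.
function isOpenAIOfficial(baseUrl: string | undefined): boolean {
  // Suggested fix: undefined means the default OpenAI endpoint.
  if (baseUrl === undefined) return true;
  return /^https:\/\/api\.openai\.com(\/|$)/.test(baseUrl);
}

function inferReasoning(wire: string, modelId: string, baseUrl?: string): boolean {
  if (wire === "openai-chat") {
    return isOpenAIOfficial(baseUrl) && /^(o1|o3|o4|gpt-5)/.test(modelId);
  }
  return ["anthropic", "openai-responses", "openai-codex-responses"].includes(wire);
}

// With the fix, the suggested assertion passes:
console.assert(inferReasoning("openai-chat", "gpt-5-turbo", undefined) === true);
```

Custom-gateway routing is unaffected: an explicit non-OpenAI baseUrl still yields `false`.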

     * actually know the target accepts it. (#183)
     */
    function isOpenAIOfficial(baseUrl: string | undefined): boolean {
      if (!baseUrl) return false;
@github-actions bot (Contributor):

baseUrl can be undefined when callers rely on the default OpenAI endpoint. Returning false here forces inferReasoning('openai-chat', ...) to disable reasoning even for OpenAI official models. Consider treating undefined as OpenAI-official (or pass provider into inferReasoning) to avoid this regression.

@hqhq1025 hqhq1025 merged commit b7ab2c8 into main Apr 23, 2026
7 checks passed
@hqhq1025 hqhq1025 deleted the fix/issue-183-reasoning-flag branch April 23, 2026 03:13

Labels

area:providers (packages/providers: pi-ai adapter, model calls), docs (Documentation)


Development

Successfully merging this pull request may close these issues.

bug(providers): custom OpenAI-compat endpoints reject 'developer' role (Qwen/DashScope/DeepSeek/GLM)
