Conversation
…odels (#183)

`synthesizeWireModel` used to hard-code `reasoning: true` on every synthetic `PiModel`. pi-ai's openai-chat / openai-responses adapters read that flag and emit the system prompt with role `developer` instead of `system`. `developer` is OpenAI-Responses-only; every OpenAI-compatible gateway (Qwen/DashScope, DeepSeek, GLM/BigModel, Moonshot, …) rejects it with HTTP 400.

Add `inferReasoning(wire, modelId, baseUrl)`:

- anthropic / openai-responses / openai-codex-responses -> true
- openai-chat -> true only when baseUrl is api.openai.com AND modelId matches a known reasoning family (o1/o3/o4/gpt-5)
- otherwise false

Preserves the #134 fix (openai-responses reasoning) while unblocking custom-provider users on Qwen, DeepSeek, GLM, Moonshot, etc.

Signed-off-by: hqhq1025 <[email protected]>
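The adapter behavior the commit message describes can be sketched as follows. This is an illustrative reduction, not pi-ai's actual code: `systemRoleFor` and this minimal `PiModel` shape are assumptions.

```typescript
type Role = "system" | "developer";

// Minimal stand-in for the synthetic model record; the real PiModel has more fields.
interface PiModel {
  id: string;
  reasoning: boolean;
}

// reasoning === true flips the system prompt to the Responses-only "developer"
// role, which third-party OpenAI-compatible gateways reject with HTTP 400.
function systemRoleFor(model: PiModel): Role {
  return model.reasoning ? "developer" : "system";
}

console.log(systemRoleFor({ id: "qwen-max", reasoning: true }));  // "developer" (rejected by DashScope)
console.log(systemRoleFor({ id: "qwen-max", reasoning: false })); // "system" (accepted)
```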
Contributor
Findings
- [Major] `inferReasoning` disables OpenAI-chat reasoning when `baseUrl` is omitted. This regresses default OpenAI routing for synthesized models: `isOpenAIOfficial(undefined)` returns `false`, so known reasoning families (e.g. `gpt-5*`, `o3*`) are never flagged even when the endpoint is the OpenAI default. Evidence: `packages/providers/src/index.ts:182`, `packages/providers/src/index.ts:203`

Suggested fix:

```typescript
function isOpenAIOfficial(baseUrl: string | undefined): boolean {
  // undefined means default OpenAI endpoint in pi-ai openai adapters
  if (baseUrl === undefined) return true;
  return /^https:\/\/api\.openai\.com(\/|$)/.test(baseUrl);
}
```
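A quick check of the suggested helper's behavior (a sketch; the function is repeated here so the snippet is self-contained), including why the `(\/|$)` tail matters: it rejects look-alike hosts that merely start with the official hostname.

```typescript
// Reviewer-suggested version: undefined means the default OpenAI endpoint.
function isOpenAIOfficial(baseUrl: string | undefined): boolean {
  if (baseUrl === undefined) return true;
  return /^https:\/\/api\.openai\.com(\/|$)/.test(baseUrl);
}

console.log(isOpenAIOfficial(undefined));                             // true: default endpoint
console.log(isOpenAIOfficial("https://api.openai.com/v1"));           // true: official host
console.log(isOpenAIOfficial("https://dashscope.aliyuncs.com"));      // false: third-party gateway
console.log(isOpenAIOfficial("https://api.openai.com.evil.example")); // false: look-alike host
```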
Summary
- Review mode: initial
- 1 issue found (reasoning detection regression on `openai-chat` when `baseUrl` is not set).
- Not found in repo/docs: `docs/VISION.md`, `docs/PRINCIPLES.md` in this checkout, so those constraints could not be re-validated directly for this run.
Testing
- Not run (automation)
- Suggested tests: add a unit test asserting `inferReasoning('openai-chat', 'gpt-5-turbo', undefined) === true`.
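That test might look like the following sketch. It assumes the suggested `isOpenAIOfficial` fix is applied, and `inferReasoning` is reduced here to its `openai-chat` branch for illustration; the real function in `packages/providers/src/index.ts` covers all wire types.

```typescript
// Suggested fix applied: an undefined baseUrl means the default OpenAI endpoint.
function isOpenAIOfficial(baseUrl: string | undefined): boolean {
  if (baseUrl === undefined) return true;
  return /^https:\/\/api\.openai\.com(\/|$)/.test(baseUrl);
}

const REASONING_FAMILY = /^(o[134]|gpt-5)/;

// Illustrative reduction: only the openai-chat branch of inferReasoning.
function inferReasoning(wire: "openai-chat", modelId: string, baseUrl?: string): boolean {
  return isOpenAIOfficial(baseUrl) && REASONING_FAMILY.test(modelId);
}

// The suggested regression test: with the fix, the default endpoint keeps reasoning on.
console.log(inferReasoning("openai-chat", "gpt-5-turbo", undefined)); // true
```

Third-party gateways still get `false`, so the fix would not reintroduce the `developer`-role 400s.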
```typescript
 * actually know the target accepts it. (#183)
 */
function isOpenAIOfficial(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false;
```
Contributor
`baseUrl` can be `undefined` when callers rely on the default OpenAI endpoint. Returning `false` here forces `inferReasoning('openai-chat', ...)` to disable reasoning even for OpenAI official models. Consider treating `undefined` as OpenAI-official (or pass provider into `inferReasoning`) to avoid this regression.
Fixes #183.
Problem
Custom providers on OpenAI-compatible gateways (Qwen/DashScope, DeepSeek, GLM/BigModel, Moonshot, any `openai-chat` wire pointing to a non-OpenAI baseUrl) returned HTTP 400.

Root cause
`packages/providers/src/index.ts::synthesizeWireModel` hard-coded `reasoning: true` on every synthetic `PiModel`. pi-ai's openai-chat / openai-responses adapters treat `model.reasoning === true` as "this endpoint supports the Responses API `developer` role" and rewrite the system prompt role accordingly. `developer` is OpenAI-Responses-only (GPT-5 / o-family); no third-party OpenAI-compat gateway accepts it.

Fix
New `inferReasoning(wire, modelId, baseUrl)`:

- `anthropic` -> `true`
- `openai-responses` / `openai-codex-responses` -> `true` (preserves #134: "bug(desktop): macOS generation fails with Instructions are required and no visible output")
- `openai-chat` -> `true` only when baseUrl is `api.openai.com` AND modelId matches `^(o[134]|gpt-5)` (OpenAI reasoning families)
- otherwise `false`

Coverage
Unblocks every OpenAI-compatible Chinese gateway and any generic OpenAI-compat endpoint:

- Qwen/DashScope (`dashscope.aliyuncs.com`)
- DeepSeek (`api.deepseek.com`)
- GLM/BigModel (`open.bigmodel.cn`)
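The Fix decision table above can be sketched as a predicate. This is a sketch of the PR's variant, in which a missing baseUrl counts as non-official (the behavior the review thread flags); names and structure here are illustrative, not the exact source.

```typescript
type Wire = "anthropic" | "openai-responses" | "openai-codex-responses" | "openai-chat";

// OpenAI reasoning families: o1/o3/o4 and gpt-5.
const OPENAI_REASONING_FAMILY = /^(o[134]|gpt-5)/;

function isOpenAIOfficial(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false; // the review suggests returning true here instead
  return /^https:\/\/api\.openai\.com(\/|$)/.test(baseUrl);
}

function inferReasoning(wire: Wire, modelId: string, baseUrl?: string): boolean {
  switch (wire) {
    case "anthropic":
    case "openai-responses":
    case "openai-codex-responses":
      return true; // preserves the #134 openai-responses behavior
    case "openai-chat":
      // Reasoning only for known OpenAI families on the official endpoint.
      return isOpenAIOfficial(baseUrl) && OPENAI_REASONING_FAMILY.test(modelId);
    default:
      return false; // otherwise
  }
}
```

Under this variant, `inferReasoning("openai-chat", "qwen-max", "https://dashscope.aliyuncs.com")` is `false`, which is what the integration test under Tests asserts for Qwen DashScope.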
`packages/providers/src/index.test.ts` — 9 new `inferReasoning` cases + 1 integration case asserting Qwen DashScope gets `reasoning: false` through `complete()`.

Four-principle check (PRINCIPLES §5b)
- Replaces the hard-coded `true` with a predicate

Out of scope
Didn't touch retry / errors / Settings / `agent.ts`.