Conversation
…le on OpenAI-compat endpoints (#179)

Some OpenAI-compatible gateways (Zhipu GLM at /api/paas/v4 is the reported case, but any proxy that omits a public /models endpoint fits) return HTTP 404 for GET /models even though /chat/completions works fine. The "Test connection" button was reporting a hard 404 failure for those providers, and the diagnostics panel was suggesting "add /v1", which would corrupt a correct baseUrl.

Two changes:

1. runProviderTest now falls back to POST /chat/completions with a minimal probe request when GET /models returns 404 on openai-chat or openai-responses wires. Any 2xx or API-originated 4xx (400/402/422/429) counts as "endpoint reachable"; 401/403 is surfaced as an auth error instead of the misleading 404 hint. Anthropic wires do not degrade, since /v1/models is standard there. Success now carries a probeMethod field so the renderer can distinguish a full pass from a degraded pass (a rough sketch of the classification follows below).
2. diagnose() in @open-codesign/shared skips the missingV1 hypothesis when the baseUrl already carries a /v\d+ segment (GLM /v4, AI Studio /v1beta, Cloudflare Workers AI /v1, ...) and returns the generic unknown cause instead, so users are never pushed into duplicating version segments.

Signed-off-by: hqhq1025 <[email protected]>
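For reference, a minimal sketch of the degraded-probe status classification described above, assuming Node's global fetch; the names ProbeOutcome and classifyDegradedProbe, the probe request body, and the error kinds are illustrative, not the actual symbols in connection-ipc.ts:

```ts
// Status-classification sketch for the degraded probe. Only runs after
// GET /models returned 404 on an OpenAI-compatible wire; Anthropic wires
// never reach this path.
type ProbeOutcome =
  | { ok: true; probeMethod: 'chat_completion_degraded' }
  | { ok: false; kind: 'auth' | 'not_found' | 'server' | 'network'; status?: number };

async function classifyDegradedProbe(
  url: string, // e.g. `${normalizedBaseUrl}/chat/completions`
  headers: Record<string, string>,
): Promise<ProbeOutcome> {
  let res: Response;
  try {
    res = await fetch(url, {
      method: 'POST',
      headers: { ...headers, 'content-type': 'application/json' },
      // Minimal body: the probe only checks that the endpoint answers,
      // not that the model name or request is fully valid.
      body: JSON.stringify({ model: 'probe', messages: [{ role: 'user', content: 'ping' }], max_tokens: 1 }),
    });
  } catch {
    return { ok: false, kind: 'network' };
  }

  if (res.ok) return { ok: true, probeMethod: 'chat_completion_degraded' };
  if (res.status === 401 || res.status === 403) {
    // Auth failures are surfaced as such rather than as the misleading 404 hint.
    return { ok: false, kind: 'auth', status: res.status };
  }
  if ([400, 402, 422, 429].includes(res.status)) {
    // API-originated 4xx (unknown model, no credits, validation, rate limit)
    // still proves the endpoint is reachable.
    return { ok: true, probeMethod: 'chat_completion_degraded' };
  }
  if (res.status === 404) return { ok: false, kind: 'not_found', status: 404 };
  return { ok: false, kind: 'server', status: res.status };
}
```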
Contributor
Findings
- [Major] openai-responses degrade-probe hits /chat/completions instead of the selected wire endpoint, which can produce false-positive connection success for providers that support chat completions but fail on the Responses API actually used at runtime. Evidence: apps/desktop/src/main/connection-ipc.ts:496, apps/desktop/src/main/connection-ipc.ts:530, and the matching expectation in apps/desktop/src/main/connection-ipc.test.ts:1015.
Suggested fix:

```ts
async function probeChatCompletion(
  wire: WireApi,
  normalizedBaseUrl: string,
  headers: Record<string, string>,
): Promise<ProbeResult> {
  const path = wire === 'openai-responses' ? '/responses' : '/chat/completions';
  const url = `${normalizedBaseUrl}${path}`;
  // keep existing fetchWithTimeout + status classification
}

// call site
const probe = await probeChatCompletion(creds.wire, normalizedBaseUrl, headers);
```
Summary
- Review mode: initial
- 1 issue found in the latest head diff: openai-responses fallback probe path is inconsistent with selected wire semantics and may mask runtime failures.
Testing
- Not run (automation). Local test execution unavailable here because pnpm is not installed in this runner.
open-codesign Bot
```ts
  normalizedBaseUrl: string,
  headers: Record<string, string>,
): Promise<ProbeResult> {
  const url = `${normalizedBaseUrl}/chat/completions`;
```
Contributor
openai-responses is grouped into this degrade path, but the probe URL is hardcoded to /chat/completions. That can mark a provider healthy even when its actual Responses endpoint fails at runtime. Consider selecting the probe path by wire (/responses for openai-responses, /chat/completions for openai-chat) and update the new test accordingly.
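A rough shape for the corresponding test adjustment, assuming vitest and a stubbed global fetch; the import path, the exported probeChatCompletion helper, and its (wire, baseUrl, headers) signature follow the suggested fix above and may not match the real code in connection-ipc.test.ts:

```ts
import { describe, expect, it, vi } from 'vitest';

// Hypothetical import: the real helper lives in connection-ipc.ts and may
// not be exported under this name.
import { probeChatCompletion } from './connection-ipc';

describe('degraded probe path selection', () => {
  it('probes /responses for the openai-responses wire', async () => {
    const fetchMock = vi.fn().mockResolvedValue(new Response('{}', { status: 200 }));
    vi.stubGlobal('fetch', fetchMock);

    await probeChatCompletion('openai-responses', 'https://example.test/v1', {});

    // The degraded probe should hit the wire's own endpoint, not /chat/completions.
    expect(fetchMock).toHaveBeenCalledWith(
      'https://example.test/v1/responses',
      expect.objectContaining({ method: 'POST' }),
    );

    vi.unstubAllGlobals();
  });
});
```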
Signed-off-by: hqhq1025 <[email protected]>
Contributor
There was a problem hiding this comment.
Findings
- None.
Summary
- Review mode: follow-up after new commits
- No new issues found in the latest head diff. The prior openai-responses false-positive probe-path issue appears addressed by wire-specific degrade probing in apps/desktop/src/main/connection-ipc.ts and corresponding tests.
- Residual risk/testing gap: docs/VISION.md and docs/PRINCIPLES.md were not found in this checkout, so the policy cross-check was limited to CLAUDE.md plus the changed files.
Testing
- Not run (automation)
open-codesign Bot
Fixes #179.
Summary
- When GET /models returns 404 on openai-chat or openai-responses wires, fall back to a minimal POST /chat/completions before declaring the endpoint dead. A 2xx or any API-originated 4xx (400 model_unknown / 402 insufficient credits / 422 / 429) counts as pass; 401/403 is re-surfaced as an auth error; only 404 + 5xx + network errors keep the original failure. Anthropic wires are intentionally not degraded, since /v1/models is standard there. Successful responses now carry probeMethod: 'models' | 'chat_completion_degraded' so the renderer can surface "/models unavailable but /chat/completions responded" for gateways like Zhipu GLM.
- missingV1 diagnostic hint: diagnose() in @open-codesign/shared now skips the "add /v1" hypothesis when the baseUrl already carries a /v\d+ segment (GLM /v4, AI Studio /v1beta, Cloudflare Workers AI /v1, ...). Previously the panel would offer to corrupt a perfectly correct baseUrl by appending /v1. A rough sketch of that check follows below.
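For illustration, the version-segment check could look roughly like this; hasVersionSegment is a made-up helper name and the regex details are an assumption about how diagnose() in @open-codesign/shared might implement the skip:

```ts
// Returns true for base URLs whose path already carries a version segment,
// e.g. /api/paas/v4 (GLM), /v1beta (AI Studio), /v1 (Cloudflare Workers AI).
function hasVersionSegment(baseUrl: string): boolean {
  const path = new URL(baseUrl).pathname;
  return /\/v\d+[a-z]*(\/|$)/i.test(path);
}

hasVersionSegment('https://open.bigmodel.cn/api/paas/v4'); // true  -> skip the missingV1 hint
hasVersionSegment('https://api.example.com');            // false -> missingV1 can still be suggested
```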
Reproducing the GLM scenario (issue #179)

Adding Zhipu GLM as a custom provider (OpenAI Chat wire, baseUrl https://open.bigmodel.cn/api/paas/v4) used to fail "Test connection" with HTTP 404 plus an auto-suggest to append /v1. After this change the test sees /models return 404, degrades to POST /chat/completions, sees a real response from the gateway, and reports ok: true with probeMethod: 'chat_completion_degraded'.
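The raw probe sequence can also be reproduced outside the app with two plain requests; the model name and the ZHIPU_API_KEY environment variable below are placeholders, and the exact status of the second call depends on the account:

```ts
// Manual reproduction sketch (Node 18+ ESM, top-level await).
const base = 'https://open.bigmodel.cn/api/paas/v4';
const headers = { authorization: `Bearer ${process.env.ZHIPU_API_KEY}` };

// Step 1: the listing endpoint this gateway does not expose publicly.
const models = await fetch(`${base}/models`, { headers });
console.log('GET /models ->', models.status); // 404 per issue #179

// Step 2: the degraded probe target, which does answer.
const chat = await fetch(`${base}/chat/completions`, {
  method: 'POST',
  headers: { ...headers, 'content-type': 'application/json' },
  body: JSON.stringify({ model: 'glm-4', messages: [{ role: 'user', content: 'ping' }] }),
});
console.log('POST /chat/completions ->', chat.status); // any API-originated status proves reachability
```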
Test plan

- pnpm test (881 tests pass)
- pnpm typecheck (clean)
- pnpm lint (clean)
- Cases in apps/desktop/src/main/connection-ipc.test.ts cover: GLM 404 /models + 200 /chat/completions, 404 + 404 (preserves original 404), 404 + 400 (model_unknown still passes), 404 + 401 (surfaces auth error), 200 /models (no probe, probeMethod=models), anthropic 404 (no degrade), openai-responses degrade.
- Cases in diagnostics.test.ts cover: GLM /v4, Cloudflare Workers AI /v1, AI Studio /v1beta all skip missingV1; a plain host without a version segment still suggests missingV1.
PRINCIPLES §5b

- The success payload change is additive (probeMethod is optional when reading existing responses); a minimal type sketch follows below.
- Existing consumers that never read probeMethod keep working.
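Expressed as a type, the additive shape would look roughly like this; ConnectionTestSuccess is an assumed name for the success payload, not necessarily the one used in the codebase:

```ts
// probeMethod is optional, so readers that predate this PR type-check and
// behave unchanged; only the new renderer code branches on it.
interface ConnectionTestSuccess {
  ok: true;
  probeMethod?: 'models' | 'chat_completion_degraded';
}
```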