
fix(desktop): degrade-probe test connection for OpenAI-compat endpoints without /models (#179)#182

Merged
hqhq1025 merged 2 commits into main from fix/179-glm-test-connection-degraded-probe
Apr 23, 2026
Conversation

@hqhq1025
Collaborator

Fixes #179.

Summary

  • Test connection degrade-probe. When GET /models returns 404 on the openai-chat or openai-responses wires, fall back to a minimal POST /chat/completions before declaring the endpoint dead. A 2xx or any API-originated 4xx (400 model_unknown, 402 insufficient credits, 422, 429) counts as a pass; 401/403 is re-surfaced as an auth error; only 404, 5xx, and network errors keep the original failure. Anthropic wires are intentionally not degraded, since /v1/models is standard there. Successful responses now carry probeMethod: 'models' | 'chat_completion_degraded' so the renderer can surface "/models unavailable but /chat/completions responded" for gateways like Zhipu GLM.
  • Suppress misleading missingV1 diagnostic hint. diagnose() in @open-codesign/shared now skips the "add /v1" hypothesis when the baseUrl already carries a /v\d+ segment (GLM /v4, AI Studio /v1beta, Cloudflare Workers AI /v1, ...). Previously the panel would offer to corrupt a perfectly correct baseUrl by appending /v1.
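The status classification described above can be sketched as follows. This is an illustrative reduction, not the PR's actual code; the names `ProbeOutcome` and `classifyDegradedProbeStatus` are hypothetical.

```typescript
// Hypothetical sketch of the degrade-probe status classification:
// 2xx and API-originated 4xx pass, 401/403 resurface as auth errors,
// 404 and 5xx keep the original failure.
type ProbeOutcome = 'pass' | 'auth_error' | 'fail';

function classifyDegradedProbeStatus(status: number): ProbeOutcome {
  if (status >= 200 && status < 300) return 'pass';          // endpoint responded
  if (status === 401 || status === 403) return 'auth_error'; // bad key: resurface
  if (status === 404) return 'fail';                         // preserve original 404
  if (status >= 400 && status < 500) return 'pass';          // 400/402/422/429: API answered
  return 'fail';                                             // 5xx and everything else
}
```

The key design point is that a 400 model_unknown still counts as reachable: the gateway parsed the request and answered from its API layer, which is all the probe needs to know.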

Reproducing the GLM scenario (issue #179)

Adding Zhipu GLM as a custom provider (OpenAI Chat wire, baseUrl https://open.bigmodel.cn/api/paas/v4) used to fail "Test connection" with HTTP 404 and an auto-suggestion to append /v1. After this change:

  • The probe sees /models return 404, degrades to POST /chat/completions, sees a real response from the gateway, and reports ok: true with probeMethod: 'chat_completion_degraded'.
  • If the user's stored key is bad, the degrade probe's 401 is surfaced as the auth-error hint rather than the 404 one.
  • If the baseUrl really is wrong and both endpoints 404, the original 404 error is preserved (no silent pass).

Test plan

  • pnpm test (881 tests pass)
  • pnpm typecheck (clean)
  • pnpm lint (clean)
  • New Vitest cases in apps/desktop/src/main/connection-ipc.test.ts cover: GLM 404 /models + 200 /chat/completions, 404 + 404 (preserves original 404), 404 + 400 (model_unknown still passes), 404 + 401 (surfaces auth error), 200 /models (no probe, probeMethod=models), anthropic 404 (no degrade), openai-responses degrade.
  • New diagnostics.test.ts cases cover: GLM /v4, Cloudflare Workers AI /v1, AI Studio /v1beta — all skip missingV1; plain host without version still suggests missingV1.

PRINCIPLES §5b

  • Compatibility: green — response type is additive (probeMethod is optional when reading existing responses).
  • Upgradeability: green — no schema/IPC version bump needed; callers that don't read probeMethod keep working.
  • No bloat: green — ~55 lines added to one helper plus two tiny diagnostic tweaks; no new deps.
  • Elegance: green — degrade is scoped to the exact wires that need it, auth failures are correctly re-routed, and the fallback is invisible to happy-path providers.

…le on OpenAI-compat endpoints (#179)

Some OpenAI-compatible gateways (Zhipu GLM at /api/paas/v4 is the
reported case, but any proxy that omits a public /models endpoint fits)
return HTTP 404 for GET /models even though /chat/completions works
fine. The "Test connection" button was reporting a hard 404 failure for
those providers and the diagnostics panel was suggesting "add /v1",
which would corrupt a correct baseUrl.

Two changes:

1. runProviderTest now falls back to POST /chat/completions with a
   minimal probe request when GET /models returns 404 on openai-chat or
   openai-responses wires. Any 2xx or API-originated 4xx (400/402/422/
   429) counts as "endpoint reachable". 401/403 is surfaced as an auth
   error instead of the misleading 404 hint. Anthropic wires do not
   degrade — /v1/models is standard there. Success now carries a
   probeMethod field so the renderer can distinguish a full pass from
   a degraded pass.

2. diagnose() in @open-codesign/shared skips the missingV1 hypothesis
   when the baseUrl already carries a /v\d+ segment (GLM /v4, AI Studio
   /v1beta, Cloudflare Workers AI /v1, ...) and returns the generic
   unknown cause instead, so users are never pushed into duplicating
   version segments.

Signed-off-by: hqhq1025 <[email protected]>
@github-actions github-actions Bot added the area:desktop apps/desktop (Electron shell, renderer) label Apr 23, 2026
Contributor

@github-actions github-actions Bot left a comment


Findings

  • [Major] openai-responses degrade-probe hits /chat/completions instead of the selected wire endpoint, which can produce false-positive connection success for providers that support chat completions but fail on the Responses API actually used at runtime. Evidence: apps/desktop/src/main/connection-ipc.ts:496, apps/desktop/src/main/connection-ipc.ts:530, and the matching expectation in apps/desktop/src/main/connection-ipc.test.ts:1015.
    Suggested fix:
    async function probeChatCompletion(
      wire: WireApi,
      normalizedBaseUrl: string,
      headers: Record<string, string>,
    ): Promise<ProbeResult> {
      const path = wire === 'openai-responses' ? '/responses' : '/chat/completions';
      const url = `${normalizedBaseUrl}${path}`;
      // keep existing fetchWithTimeout + status classification
    }
    
    // call site
    const probe = await probeChatCompletion(creds.wire, normalizedBaseUrl, headers);

Summary

  • Review mode: initial
  • 1 issue found in the latest head diff: openai-responses fallback probe path is inconsistent with selected wire semantics and may mask runtime failures.

Testing

  • Not run (automation). Local test execution unavailable here because pnpm is not installed in this runner.

open-codesign Bot

Comment thread: apps/desktop/src/main/connection-ipc.ts (Outdated)
normalizedBaseUrl: string,
headers: Record<string, string>,
): Promise<ProbeResult> {
const url = `${normalizedBaseUrl}/chat/completions`;
Contributor


openai-responses is grouped into this degrade path, but the probe URL is hardcoded to /chat/completions. That can mark a provider healthy even when its actual Responses endpoint fails at runtime. Consider selecting the probe path by wire (/responses for openai-responses, /chat/completions for openai-chat) and update the new test accordingly.

Contributor

@github-actions github-actions Bot left a comment


Findings

  • None.

Summary

  • Review mode: follow-up after new commits
  • No new issues found in the latest head diff. The prior openai-responses false-positive probe-path issue appears addressed by wire-specific degrade probing in apps/desktop/src/main/connection-ipc.ts and corresponding tests.
  • Residual risk/testing gap: docs/VISION.md and docs/PRINCIPLES.md were not found in this checkout, so the policy cross-check was limited to CLAUDE.md and the changed files.

Testing

  • Not run (automation)

open-codesign Bot

@hqhq1025 hqhq1025 merged commit 83e81e0 into main Apr 23, 2026
7 checks passed
@hqhq1025 hqhq1025 deleted the fix/179-glm-test-connection-degraded-probe branch April 23, 2026 03:12

Labels

area:desktop apps/desktop (Electron shell, renderer)


Development

Successfully merging this pull request may close these issues.

bug(desktop): custom provider test connection fails with HTTP 404 on GLM/OpenAI-compat endpoints that don't expose /models