Summary
Adding a custom provider with the OpenAI Chat wire protocol, baseUrl `https://open.bigmodel.cn/api/paas/v4` (GLM / 智谱), and a valid API key → Test Connection shows `HTTP 404`, even though real generation via the same config works.
Likely affects other Chinese OpenAI-compatible endpoints that don't expose a GET `/models` list endpoint (e.g. some Baidu / Tencent / Moonshot / Aliyun paths).
Reproduction
- Settings → API Services → Add custom endpoint
- Protocol: OpenAI Chat
- Name: GLM, baseUrl: `https://open.bigmodel.cn/api/paas/v4`, API key: a valid GLM key, default model: `glm-4.6v`
- Click Test Connection → `HTTP 404` (reproducible outside the app; see the sketch below)
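A minimal standalone reproduction sketch, assuming Node 18+'s built-in `fetch` and a valid key in `GLM_API_KEY`; the two paths are the standard OpenAI-compat ones, and the statuses in the comments are those described in this report:

```ts
// Standalone reproduction sketch (Node 18+, built-in fetch).
const baseUrl = "https://open.bigmodel.cn/api/paas/v4";
const headers = {
  Authorization: `Bearer ${process.env.GLM_API_KEY}`,
  "Content-Type": "application/json",
};

async function main() {
  // What the connection test fires: GET /models (GLM answers 404).
  const probe = await fetch(`${baseUrl}/models`, { headers });
  console.log("GET /models →", probe.status);

  // What real generation fires: POST /chat/completions (works).
  const gen = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      model: "glm-4.6v",
      messages: [{ role: "user", content: "ping" }],
      max_tokens: 1,
    }),
  });
  console.log("POST /chat/completions →", gen.status);
}

main();
```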
Root Cause
The connection test (`apps/desktop/src/main/connection-ipc.ts`) probes `GET ${baseUrl}/models` to verify reachability and auth. GLM's `/api/paas/v4` surface exposes `/chat/completions` but does not publish a standards-compliant `/models` JSON endpoint; it returns 404 / HTML.
Real generation POSTs to `/chat/completions` and works fine, so the 404 is a false negative from the probe, not a real outage. Users read the red banner as "broken" and stop there.
Impact
- Blocks onboarding for GLM users (a large Chinese user segment)
- Likely causes similar false negatives on other domestic OpenAI-compatible endpoints
- The existing 404 diagnostic hypothesis (`missingV1`) suggests appending `/v1`, but GLM already uses `/v4`, so the hint is wrong and misleading
Expected Behavior
One of:
- (A) If `/models` returns 404, degrade the probe to a minimal `/chat/completions` call (short prompt, `max_tokens: 1`) to verify the endpoint is alive (see the sketch after this list)
- (B) Show a UI hint: "Some providers (GLM, 通义 via non-standard paths, etc.) do not expose `/models`. You can skip the probe and save; verify by sending a real prompt."
- (C) Provide a "skip probe" save button that commits the config regardless
- In all cases: do not fire the `missingV1` "append /v1" auto-fix hypothesis when the baseUrl already ends in `/vN`
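A minimal sketch of option (A), assuming a plain `fetch`-based probe; `testConnection` and its return shape are illustrative, not the actual `connection-ipc.ts` API:

```ts
// Degraded-probe sketch: if GET /models 404s, try a one-token
// POST /chat/completions before declaring the endpoint dead.
async function testConnection(
  baseUrl: string,
  apiKey: string,
  model: string,
): Promise<{ ok: boolean; detail: string }> {
  const headers = {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };

  const probe = await fetch(`${baseUrl}/models`, { headers });
  if (probe.ok) return { ok: true, detail: "models list reachable" };
  if (probe.status !== 404) return { ok: false, detail: `HTTP ${probe.status}` };

  // Providers like GLM serve /chat/completions but no /models list.
  const fallback = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "ping" }],
      max_tokens: 1, // keep the paid fallback call as cheap as possible
    }),
  });
  return fallback.ok
    ? { ok: true, detail: "no /models endpoint, but chat/completions responds" }
    : { ok: false, detail: `HTTP ${fallback.status} on both probes` }; // (B): toast here
}
```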
Suggested Fix
Prefer (A) as the primary strategy, with (B) as a fallback toast when the degraded probe also fails. Disable `missingV1` when the baseUrl matches the `/v\d+$` regex (sketch below).
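A sketch of the `missingV1` guard; the helper name is hypothetical:

```ts
// Only suggest appending /v1 when the baseUrl does not already end
// in a version segment (e.g. GLM's /api/paas/v4).
const endsInVersion = /\/v\d+$/;

function shouldSuggestMissingV1(baseUrl: string): boolean {
  // Strip trailing slashes so ".../v4/" is also recognized.
  return !endsInVersion.test(baseUrl.replace(/\/+$/, ""));
}

shouldSuggestMissingV1("https://open.bigmodel.cn/api/paas/v4"); // false → suppress the hint
shouldSuggestMissingV1("https://api.example.com");            // true  → hint may apply
```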
Related
cc @hqhq1025