Commit dc26bd9

Author: ningding97
fix: increase VERIFY_TIMEOUT_MS from 10s to 120s for local LLMs
Local LLMs (e.g. Qwen3 14B on a laptop GPU via llama.cpp) can easily exceed 10s for the verification response. Bump to 120s so onboarding does not fail for users with slower local models. Closes #28972
1 parent: f943c76 · commit: dc26bd9

File tree

1 file changed (+1, -1)

src/commands/onboard-custom.ts (1 addition, 1 deletion)

@@ -18,7 +18,7 @@ import type { SecretInputMode } from "./onboard-types.js";
 const DEFAULT_OLLAMA_BASE_URL = "http://127.0.0.1:11434/v1";
 const DEFAULT_CONTEXT_WINDOW = 4096;
 const DEFAULT_MAX_TOKENS = 4096;
-const VERIFY_TIMEOUT_MS = 10000;
+const VERIFY_TIMEOUT_MS = 120_000;

 /**
  * Detects if a URL is from Azure AI Foundry or Azure OpenAI.
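The commit only shows the constant, not how it is consumed. A minimal sketch of how a verification timeout like this is typically enforced in TypeScript — note that `verifyWithTimeout` is a hypothetical helper name, not code from `onboard-custom.ts`:

```typescript
// Illustrative sketch, not the actual onboarding code.
const VERIFY_TIMEOUT_MS = 120_000; // value introduced by this commit

async function verifyWithTimeout<T>(
  work: Promise<T>,
  ms: number = VERIFY_TIMEOUT_MS,
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`verification timed out after ${ms} ms`)),
      ms,
    );
  });
  try {
    // Whichever settles first wins; a slow local LLM now has up to 120 s.
    return await Promise.race([work, timeout]);
  } finally {
    clearTimeout(timer); // don't keep the event loop alive after settling
  }
}
```

Under a pattern like this, the old 10 000 ms bound would reject a local model that needs, say, 30 s to produce its first verification response, while 120 000 ms leaves headroom for slower llama.cpp setups.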

0 commit comments
