[Bug]: Cannot update vLLM base URL; agent caches old config in hidden models.json #37309
Description
Bug type
Behavior bug (incorrect output/state without crash)
Summary
Updating the model baseUrl in openclaw.json does not take effect because the agent continues to use a cached configuration from .openclaw/agent/main/agent/models.json.
When a user corrects a mistakenly entered baseUrl in the main openclaw.json file and restarts the gateway, the change is ignored. The agent still sends requests to the old URL. Users are forced to dig into the internal .openclaw/ directory and manually edit models.json for the change to apply, which is unintuitive and creates configuration inconsistencies.
Steps to reproduce
- Set up OpenClaw and select `vLLM` as the model (running via a llama.cpp backend).
- During the initial configuration, intentionally set an incorrect `baseUrl` (e.g., `127.0.0.1/completion` instead of the correct `127.0.0.1/v1`).
- Open `openclaw tui` and send a chat message.
- The request fails with a "file not found" error. Checking the llama.cpp logs confirms the request was incorrectly sent to `127.0.0.1/completion/chat/completions`.
- Open the main `openclaw.json` config file and fix the `baseUrl` to the correct path (`127.0.0.1/v1`).
- Restart the gateway.
- Open `openclaw tui` again and send another message.
- Check the llama.cpp logs again. The request is still being sent to the old, incorrect endpoint (`127.0.0.1/completion/chat/completions`).
- Navigate to `.openclaw/agent/main/agent/models.json` and observe that the old `baseUrl` is still stored there.
P.S. Replace `127.0.0.1` with your target server's IP where applicable.
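The divergence in the last step can be confirmed by comparing the stored `baseUrl` values directly. A minimal sketch; the dictionaries below are stand-ins for the two files, and the nesting of the cached `models.json` is an assumption based on this report:

```python
# Hypothetical contents standing in for the two config files from this report.
openclaw_json = {
    "models": {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/v1"}}}
}
cached_models_json = {
    "providers": {"vllm": {"baseUrl": "http://127.0.0.1/completion"}}
}

def lookup(cfg: dict, *path: str) -> str:
    """Walk nested dict keys down to a leaf value."""
    node = cfg
    for key in path:
        node = node[key]
    return node

main_url = lookup(openclaw_json, "models", "providers", "vllm", "baseUrl")
cached_url = lookup(cached_models_json, "providers", "vllm", "baseUrl")
print("openclaw.json :", main_url)
print("models.json   :", cached_url)
print("stale cache   :", main_url != cached_url)
```

With a real install you would load the two JSON files from disk instead of using inline dictionaries.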
Expected behavior
The openclaw.json file should be the single source of truth for global configuration. Modifying the baseUrl in openclaw.json and restarting the gateway should automatically update or override the internal agent configurations. The user should not have to manually edit files inside the hidden .openclaw/agent/main/agent/ directory.
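Since the provider block in openclaw.json declares `"mode": "merge"`, the expected resolution on gateway restart would be a deep merge in which values from openclaw.json override the cached agent config. A rough sketch of that semantics (an illustration of the expectation, not OpenClaw's actual implementation):

```python
def deep_merge(cached: dict, override: dict) -> dict:
    """Recursively merge `override` into `cached`; override wins on conflicts."""
    merged = dict(cached)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Stale cached agent config vs. the freshly edited openclaw.json values.
cached = {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/completion", "apiKey": "k"}}}
fresh = {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/v1"}}}
resolved = deep_merge(cached, fresh)
print(resolved["providers"]["vllm"]["baseUrl"])  # -> http://127.0.0.1/v1
```

Under this semantics the corrected `baseUrl` wins while untouched cached keys (like `apiKey`) are preserved.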
Actual behavior
The configuration in openclaw.json does not sync with the agent's internal state. The agent continues to use the outdated baseUrl stored in .openclaw/agent/main/agent/models.json. The only way to fix the issue and successfully route requests to v1/chat/completions is to manually find and edit this hidden models.json file.
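The manual workaround can be scripted rather than done by hand. A sketch that patches the cached file in place; it runs against a throwaway temp copy here, and the layout of `models.json` is an assumption based on this report:

```python
import json
import tempfile
from pathlib import Path

# Demo on a throwaway copy; with a real install you would instead point
# `path` at .openclaw/agent/main/agent/models.json.
tmp = Path(tempfile.mkdtemp())
path = tmp / "models.json"
path.write_text(json.dumps(
    {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/completion"}}}
))

# Load the cached config, overwrite the stale baseUrl, and write it back.
cfg = json.loads(path.read_text())
cfg["providers"]["vllm"]["baseUrl"] = "http://127.0.0.1/v1"
path.write_text(json.dumps(cfg, indent=2))

print(json.loads(path.read_text())["providers"]["vllm"]["baseUrl"])
```

Stop the gateway before patching the file so the agent does not overwrite the edit on shutdown.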
OpenClaw version
2026.3.2
Operating system
Ubuntu 24.04
Install method
curl -fsSL https://openclaw.ai/install.sh | bash
(Package is managed by nvm)
Logs, screenshots, and evidence
Impact and severity
No response
Additional information
vLLM server: llama.cpp
vLLM model: unsloth/gpt-oss-20b-GGUF:Q4_K_M
vLLM config:

```json
"models": {
  "mode": "merge",
  "providers": {
    "vllm": {
      "baseUrl": "http://my_vllm_server_ip:8000/v1",
      "apiKey": "my_vllm_api_key",
      "api": "openai-completions",
      "models": [
        {
          "id": "unsloth/gpt-oss-20b-GGUF:Q4_K_M",
          "name": "unsloth/gpt-oss-20b-GGUF:Q4_K_M",
          "reasoning": false,
          "input": ["text"],
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          },
          "contextWindow": 128000,
          "maxTokens": 8192
        }
      ]
    }
  }
},
```