[Bug]: Cannot update vLLM base URL; agent caches old config in hidden models.json #37309

@Mirai1129

Description

Bug type

Behavior bug (incorrect output/state without crash)

Summary

Updating the model baseUrl in openclaw.json does not take effect because the agent continues to use a cached configuration from .openclaw/agent/main/agent/models.json.

When a user corrects a mistakenly entered baseUrl in the main openclaw.json file and restarts the gateway, the changes are ignored. The agent still attempts to send requests to the old URL. Users are forced to manually dig into the internal .openclaw/ directory to modify models.json for the changes to apply, which is unintuitive and creates configuration inconsistencies.

Steps to reproduce

  1. Set up OpenClaw and select vLLM as the model (running via a llama.cpp backend).
  2. During the initial configuration, intentionally set an incorrect baseUrl (e.g., 127.0.0.1/completion instead of the correct 127.0.0.1/v1).
  3. Open openclaw tui and send a chat message.
  4. The request fails with a "file not found" error. Checking the llama.cpp logs confirms the request was incorrectly sent to 127.0.0.1/completion/chat/completions.
  5. Open the main openclaw.json config file and fix the baseUrl to the correct path (127.0.0.1/v1).
  6. Restart the gateway.
  7. Open openclaw tui again and send another message.
  8. Check the llama.cpp logs again. The request is still being sent to the old, incorrect endpoint (127.0.0.1/completion/chat/completions).
  9. Navigate to .openclaw/agent/main/agent/models.json and observe that the old baseUrl is still stored there.

P.S.: 127.0.0.1 can be replaced with your target server's IP address.

Expected behavior

The openclaw.json file should be the single source of truth for global configuration. Modifying the baseUrl in openclaw.json and restarting the gateway should automatically update or override the internal agent configurations. The user should not have to manually edit files inside the hidden .openclaw/agent/main/agent/ directory.

Actual behavior

The configuration in openclaw.json does not sync with the agent's internal state. The agent continues to use the outdated baseUrl stored in .openclaw/agent/main/agent/models.json. The only way to fix the issue and successfully route requests to v1/chat/completions is to manually find and edit this hidden models.json file.
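One way a fix could work is to re-derive the agent's effective provider configuration from openclaw.json on every gateway start, so user edits always win over the cached copy. A minimal sketch of that precedence rule follows; the function name and file layout are assumptions for illustration, not OpenClaw's actual internals.

```python
import json
from pathlib import Path

def effective_provider_config(openclaw_json: Path, cached_models_json: Path) -> dict:
    """Merge cached provider settings with openclaw.json, letting
    openclaw.json override any cached values (hypothetical sketch;
    keys mirror the config excerpt shown in this report)."""
    user = json.loads(openclaw_json.read_text()).get("models", {}).get("providers", {})
    cached = json.loads(cached_models_json.read_text()).get("providers", {})
    merged = {}
    for name in cached.keys() | user.keys():
        entry = dict(cached.get(name, {}))   # start from the cached snapshot
        entry.update(user.get(name, {}))     # openclaw.json takes precedence
        merged[name] = entry
    return merged
```

Under this rule, fixing baseUrl in openclaw.json and restarting would be sufficient; the cached models.json would only supply values the user has not set.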

OpenClaw version

2026.3.2

Operating system

Ubuntu 24.04

Install method

curl -fsSL https://openclaw.ai/install.sh | bash
(Package is managed by nvm)

Logs, screenshots, and evidence

Impact and severity

No response

Additional information

vLLM server: llama.cpp
vLLM model: unsloth/gpt-oss-20b-GGUF:Q4_K_M
vLLM config:

"models": {
    "mode": "merge",
    "providers": {
      "vllm": {
        "baseUrl": "http://my_vllm_server_ip:8000/v1",
        "apiKey": "my_vllm_api_key",
        "api": "openai-completions",
        "models": [
          {
            "id": "unsloth/gpt-oss-20b-GGUF:Q4_K_M",
            "name": "unsloth/gpt-oss-20b-GGUF:Q4_K_M",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
