I'll be the first to admit I won't be much help in fixing this stuff. I was troubleshooting with Claude's assistance here and had it write this summary. I'll try to provide follow-up information as I can. I ran across this because last night everything was working fine, and this morning when I fired up the webui, all of my local models (llama.cpp) had disappeared.
Summary
get_available_models() in api/config.py performs an SSRF check when fetching models from model.base_url. If the hostname resolves to a private IP and doesn't match the hardcoded allowlist (ollama, localhost, 127.0.0.1, lmstudio, lm-studio), it raises ValueError("SSRF: resolved hostname to private IP") and silently falls back to an empty model list.
This blocks legitimate local-network OpenAI-compatible endpoints like llama-swap, vLLM, llama.cpp, TabbyAPI, etc. from populating the model dropdown.
Reproduction
- Run an OpenAI-compatible server on the local Docker network (e.g. llama-swap at http://llama-swap:8880/v1)
- Configure config.yaml:
  model:
    base_url: http://llama-swap:8880/v1
    provider: custom
- Verify the endpoint is reachable from inside the container (a Python equivalent is sketched after this list):
  docker exec hermes-webui curl -s http://llama-swap:8880/v1/models
  # Returns a valid model list
- Check /api/models; it returns {"groups": []}
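For anyone reproducing this without curl handy, here is a small Python equivalent of the in-container check; the hostname and port are the ones from my config above, so adjust for your setup.

```python
import json
import urllib.request

# Same check as the docker exec / curl step above, just in Python. The host
# and port come from the config.yaml in this report; adjust for your network.
UPSTREAM = "http://llama-swap:8880/v1/models"

with urllib.request.urlopen(UPSTREAM, timeout=5) as resp:
    payload = json.load(resp)

# OpenAI-compatible servers return {"object": "list", "data": [...]}, where
# each entry's "id" is a model name, so this prints a non-empty list when the
# upstream endpoint is healthy.
print([m.get("id") for m in payload.get("data", [])])
```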
Root cause
api/config.py ~line 1520:
  if not is_known_local:
      raise ValueError(f"SSRF: resolved hostname to private IP {addr[0]}")
The is_known_local check only matches a handful of hardcoded hostnames. Any other container name or local hostname that resolves to a private/link-local IP is rejected.
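For readers without the source handy, the guard presumably follows the usual resolve-then-classify pattern. The sketch below is my rough reconstruction from the snippet and error message above, not the actual hermes-webui code; everything except is_known_local, the allowlist entries, and the error string is guesswork.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hostnames the check is reported to special-case.
KNOWN_LOCAL_HOSTS = {"ollama", "localhost", "127.0.0.1", "lmstudio", "lm-studio"}

def check_base_url(base_url: str) -> None:
    """Rough reconstruction of the SSRF guard described above (not the real code)."""
    hostname = (urlparse(base_url).hostname or "").lower()
    is_known_local = hostname in KNOWN_LOCAL_HOSTS

    for *_, addr in socket.getaddrinfo(hostname, None):
        ip = ipaddress.ip_address(addr[0])
        if (ip.is_private or ip.is_link_local) and not is_known_local:
            # Any other container name or LAN host that resolves to a private
            # IP (e.g. llama-swap -> 172.18.x.x) ends up here.
            raise ValueError(f"SSRF: resolved hostname to private IP {addr[0]}")

# check_base_url("http://llama-swap:8880/v1")  # raises on a Docker bridge network
```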
Impact
- The exception is caught silently (except Exception), so there's no user-visible error; models just don't appear
- The empty result gets persisted to models_cache.json, so the failure is sticky across restarts (see the sketch after this list)
- Affects anyone running local inference servers with non-allowlisted hostnames on Docker networks, LAN, etc.
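To make the "silent and sticky" part concrete, here is an illustrative sketch of the failure path as I understand it; only get_available_models, the broad except, and models_cache.json come from the behavior described above, and the fetch function is a stand-in, not real code.

```python
import json

def fetch_models(base_url: str) -> list:
    # Stand-in for the real upstream call; it just simulates the SSRF
    # rejection from the "Root cause" section.
    raise ValueError("SSRF: resolved hostname to private IP 172.18.x.x")

def get_available_models(base_url: str, cache_path: str = "models_cache.json") -> list:
    """Illustrative sketch of the failure path, not the real hermes-webui code."""
    try:
        models = fetch_models(base_url)
    except Exception:
        # The rejection is swallowed here, so nothing surfaces in the UI.
        models = []
    # The empty result is then written to disk, which is why the failure
    # survives restarts until the cache is regenerated.
    with open(cache_path, "w") as f:
        json.dump({"groups": models}, f)
    return models

print(get_available_models("http://llama-swap:8880/v1"))  # -> []
```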
Suggested fix
One of:
- Allow all private IPs when model.provider is custom (the user has explicitly configured a local endpoint)
- Add a config flag like model.allow_private_network: true (both options are sketched after this list)
- Expand the allowlist to be user-configurable
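A minimal sketch of what the first two suggestions could look like, using the config shape from "Reproduction"; allow_private_network is the name proposed in this issue, not an existing setting, and the helper is a placeholder.

```python
def should_allow_private_ip(model_cfg: dict) -> bool:
    """Sketch of the proposed relaxations; not existing hermes-webui code.

    Option 1: trust private IPs whenever the user explicitly picked the
    custom provider. Option 2: gate it behind an opt-in config flag.
    """
    if model_cfg.get("provider") == "custom":
        return True
    return bool(model_cfg.get("allow_private_network", False))

# With config.yaml as in "Reproduction" (provider: custom), the guard from
# "Root cause" could then become something like:
#   if not is_known_local and not should_allow_private_ip(model_cfg):
#       raise ValueError(f"SSRF: resolved hostname to private IP {addr[0]}")
print(should_allow_private_ip({"provider": "custom"}))           # True
print(should_allow_private_ip({"provider": "openai"}))           # False
print(should_allow_private_ip({"allow_private_network": True}))  # True
```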
Environment
- hermes-webui: latest (ghcr.io/nesquena/hermes-webui:latest, pulled 2026-04-26)
- Docker Compose, containers on shared bridge network
- Custom provider: llama-swap (OpenAI-compatible, resolves to 172.18.x.x)