[Bug]: 404 status code (body not found) #50719
### Bug type
Regression (worked before, now fails)
### Summary
Local model GLM 4.7 Flash stopped working after I upgraded OpenClaw on 19 March 2026.
### Steps to reproduce
1. Have a setup with a local GLM 4.7 Flash model.
2. Upgrade OpenClaw with `curl -fsSL https://openclaw.ai/install-cli.sh | bash`; the model stops working.
3. Downgrade with `curl -fsSL https://openclaw.ai/install-cli.sh | bash -s -- --version 2026.3.12` and everything works again.
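One way to help maintainers narrow this down (a diagnostic sketch on my part, not something from the report: it assumes the local model is served by vLLM's OpenAI-compatible API on its default port 8000) is to hit the local endpoints directly and see whether the 404 comes from the server itself or from the request OpenClaw 2026.3.13 builds:

```shell
# Hypothetical diagnostic; adjust host/port to your local vLLM setup.

# List the models the server actually exposes:
curl -s https://127.0.0.1:8000/v1/models

# Probe the chat completions endpoint and print only the HTTP status code.
# A 404 here would point at the server; a 200 would suggest the broken
# version is requesting a different path or base URL.
curl -s -o /dev/null -w '%{http_code}\n' \
  -X POST https://127.0.0.1:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model":"mlx-community/GLM-4.7-Flash-4bit","messages":[{"role":"user","content":"hi"}]}'
```

Running this once under 2026.3.12 and once under 2026.3.13 (while watching the server's request log) would show whether the request path changed between releases.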
### Expected behavior
The local GLM 4.7 model works.
### Actual behavior
404 status code (body not found) in the TUI and Telegram when using the local GLM 4.7 Flash model.
The following appears in the logs:
```
23:46:09 warn agent/embedded {"subsystem":"agent/embedded"} {"event":"embedded_run_agent_end","tags":["error_handling","lifecycle","agent_end","assistant_error"],"runId":"94aded97-7c12-45c0-bfd7-d18f1d6d2e38","isError":true,"error":"404 status code (no body)","failoverReason":null,"model":"mlx-community/GLM-4.7-Flash-4bit","provider":"local-vllm","rawErrorPreview":"404 status code (no body)","rawErrorHash":"sha256:b02359c4c992"} embedded run agent end
23:46:09 warn 00:46:09 [agent/embedded] embedded run agent end: runId=94aded97-7c12-45c0-bfd7-d18f1d6d2e38 isError=true model=mlx-community/GLM-4.7-Flash-4bit provider=local-vllm error=404 status code (no body)
```
### OpenClaw version
OpenClaw 2026.3.13 (61d171a)
### Operating system
macOS
### Install method
`curl -fsSL https://openclaw.ai/install-cli.sh | bash`
### Model
mlx-community/GLM-4.7-Flash-4bit
### Provider / routing chain
`vllm-mlx` -> `mlx-community/GLM-4.7-Flash-4bit`
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
In the logs following appears:
23:46:09 warn agent/embedded {"subsystem":"agent/embedded"} {"event":"embedded_run_agent_end","tags":["error_handling","lifecycle","agent_end","assistant_error"],"runId":"94aded97-7c12-45c0-bfd7-d18f1d6d2e38","isError":true,"error":"404 status code (no body)","failoverReason":null,"model":"mlx-community/GLM-4.7-Flash-4bit","provider":"local-vllm","rawErrorPreview":"404 status code (no body)","rawErrorHash":"sha256:b02359c4c992"} embedded run agent end
23:46:09 warn 00:46:09 [agent/embedded] embedded run agent end: runId=94aded97-7c12-45c0-bfd7-d18f1d6d2e38 isError=true model=mlx-community/GLM-4.7-Flash-4bit provider=local-vllm error=404 status code (no body)
```
### Impact and severity
Local models (GLM 4.7) stop working.
### Additional information
Rolling back to version 2026.3.12 solves the problem with no other change, so the 2026.3.13 (61d171a) release is the one that breaks it.
The workaround is to install the previous version: `curl -fsSL https://openclaw.ai/install-cli.sh | bash -s -- --version 2026.3.12`.
With my setup it is very repeatable: same config; install the new version, 404 error; install the old version, all works; install the new version again, 404 error.