Name and Version
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
version: 8247 (ae87863)
built with GNU 13.3.0 for Linux x86_64
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
llama-server
Command line
llama-server \
--models-preset ./presets.ini \
--models-max 1
Problem description & steps to reproduce
I'm not sure whether this counts as a bug or a feature request, but I couldn't find any documentation for --webui-mcp-proxy / the MCP CORS proxy in the server README (or anywhere else).
It seems like at least a mention ought to be there, and ideally an explanation of why you might want to use it with local MCP servers.
First Bad Commit
No response
Relevant log output
No response