Feature Request: Custom baseUrl for model providers (OpenAI-compatible proxies) #2305
Closed as not planned
Labels: enhancement (New feature or request)
Description
Summary
I'd like to route model requests through a local LiteLLM proxy for smart model routing (local Ollama → Gemini → Claude), but the current openai provider doesn't respect custom baseUrl configuration.
Use Case
- Run LiteLLM proxy locally on port 4000
- Route family/work agents through it for cost savings
- LiteLLM handles fallbacks: Local Ollama GPUs → Gemini free tier → Claude premium
- Main agent stays on Claude for quality
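For reference, the fallback chain above can be expressed in a LiteLLM proxy config along these lines (the model names and ports here are illustrative placeholders, not my exact setup):

```yaml
# Illustrative LiteLLM config.yaml: try local Ollama first,
# then fall back to Gemini, then Claude.
model_list:
  - model_name: smart-router
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
  - model_name: gemini-fallback
    litellm_params:
      model: gemini/gemini-1.5-flash
  - model_name: claude-fallback
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022

router_settings:
  fallbacks:
    - smart-router: ["gemini-fallback", "claude-fallback"]
```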
Current Behavior
Setting the `OPENAI_BASE_URL` environment variable or adding `models.providers.openai.baseUrl` in the config is ignored. The `openai` provider always goes to `api.openai.com`.
Desired Behavior
Allow configuring a custom baseUrl for the openai provider (or any provider), so requests can be routed through local proxies like LiteLLM, Ollama, or other OpenAI-compatible endpoints.
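To make the expected precedence concrete, here is a minimal sketch of how the provider could resolve its effective base URL (per-agent config, then global provider config, then the env var, then the current default). All names here (`ProviderConfig`, `resolveBaseUrl`) are hypothetical; Clawdbot's internal types aren't visible from this issue:

```typescript
// Hypothetical sketch: resolve the effective base URL for the "openai"
// provider. Precedence: per-agent override > global provider config >
// OPENAI_BASE_URL env var > hard-coded default.
const DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1";

interface ProviderConfig {
  baseUrl?: string;
  apiKey?: string;
}

function resolveBaseUrl(
  agentOverride?: ProviderConfig,
  globalConfig?: ProviderConfig,
  env: Record<string, string | undefined> = process.env,
): string {
  return (
    agentOverride?.baseUrl ??
    globalConfig?.baseUrl ??
    env.OPENAI_BASE_URL ??
    DEFAULT_OPENAI_BASE_URL
  );
}

// Example: route the "family" agent through a local LiteLLM proxy.
console.log(resolveBaseUrl({ baseUrl: "http://localhost:4000/v1" }, undefined, {}));
```

With this shape, existing setups are unaffected (no config and no env var still resolves to `api.openai.com`), while proxy users get an explicit opt-in.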
Proposed Config
```json5
{
  models: {
    providers: {
      openai: {
        baseUrl: "http://localhost:4000/v1", // LiteLLM proxy
        apiKey: "sk-local-key"
      }
    }
  }
}
```

Or per-agent:
```json5
{
  agents: {
    list: [{
      id: "family",
      model: {
        primary: "openai/gpt-4o",
        providerConfig: {
          baseUrl: "http://localhost:4000/v1"
        }
      }
    }]
  }
}
```

Environment
- Clawdbot version: 2026.1.23-1
- LiteLLM proxy running with Ollama backend (3 local GPUs)
Workarounds Attempted
- Setting `OPENAI_BASE_URL` in the systemd service - ignored
- Adding `models.providers.openai.baseUrl` in config - invalid config error
- Using `models.providers.custom.baseUrl` - only works for tools, not model providers
Thanks for considering! This would enable significant cost savings by routing bulk/simple requests to local models. 🦊