
Feature Request: Custom baseUrl for model providers (OpenAI-compatible proxies) #2305

@zaNevar-dev


Summary

I'd like to route model requests through a local LiteLLM proxy for smart model routing (local Ollama → Gemini → Claude), but the current openai provider doesn't respect custom baseUrl configuration.

Use Case

  • Run LiteLLM proxy locally on port 4000
  • Route family/work agents through it for cost savings
  • LiteLLM handles fallbacks: Local Ollama GPUs → Gemini free tier → Claude premium
  • Main agent stays on Claude for quality
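For context, the fallback chain above can be expressed in a LiteLLM proxy config. This is an illustrative sketch only: the model names, keys, and exact field layout are assumptions based on LiteLLM's documented YAML format, not my actual config.

```yaml
# Hypothetical LiteLLM proxy config sketch (names illustrative)
model_list:
  - model_name: router-model
    litellm_params:
      model: ollama/llama3            # local GPUs, tried first
      api_base: http://localhost:11434
  - model_name: gemini-fallback
    litellm_params:
      model: gemini/gemini-1.5-flash  # free-tier fallback
  - model_name: claude-fallback
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022  # premium last resort

router_settings:
  fallbacks:
    - router-model: ["gemini-fallback", "claude-fallback"]
```

The proxy then exposes a single OpenAI-compatible endpoint on port 4000, which is exactly what the requested baseUrl option would point at.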

Current Behavior

Setting the OPENAI_BASE_URL environment variable, or adding models.providers.openai.baseUrl in the config, is ignored: the openai provider always calls api.openai.com.

Desired Behavior

Allow configuring a custom baseUrl for the openai provider (or any provider), so requests can be routed through local proxies like LiteLLM, Ollama, or other OpenAI-compatible endpoints.

Proposed Config

{
  models: {
    providers: {
      openai: {
        baseUrl: "http://localhost:4000/v1",  // LiteLLM proxy
        apiKey: "sk-local-key"
      }
    }
  }
}

Or per-agent:

{
  agents: {
    list: [{
      id: "family",
      model: {
        primary: "openai/gpt-4o",
        providerConfig: {
          baseUrl: "http://localhost:4000/v1"
        }
      }
    }]
  }
}
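To illustrate the intended resolution order, here is a minimal sketch of how the provider could pick its base URL. The names (ProviderConfig, resolveBaseUrl, DEFAULT_OPENAI_BASE_URL) and the precedence order are hypothetical, not Clawdbot's actual internals:

```typescript
// Hypothetical sketch of base-URL resolution for the openai provider.
interface ProviderConfig {
  baseUrl?: string;
  apiKey?: string;
}

const DEFAULT_OPENAI_BASE_URL = "https://api.openai.com/v1";

// Assumed precedence: explicit config > OPENAI_BASE_URL env var > default.
function resolveBaseUrl(
  config: ProviderConfig,
  env: Record<string, string | undefined>,
): string {
  return config.baseUrl ?? env["OPENAI_BASE_URL"] ?? DEFAULT_OPENAI_BASE_URL;
}
```

Since the official openai Node SDK already accepts a baseURL constructor option, supporting this would presumably just mean passing the resolved value through to the client.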

Environment

  • Clawdbot version: 2026.1.23-1
  • LiteLLM proxy running with Ollama backend (3 local GPUs)

Workarounds Attempted

  1. Setting OPENAI_BASE_URL in the systemd service unit: ignored.
  2. Adding models.providers.openai.baseUrl in the config: rejected with an invalid-config error.
  3. Using models.providers.custom.baseUrl: only applies to tools, not model providers.

Thanks for considering! This would enable significant cost savings by routing bulk/simple requests to local models. 🦊

Labels: enhancement