Custom Provider baseUrl Not Propagated to Model Objects #2903

@warpapaya

Description

Summary

When configuring a custom OpenAI-compatible provider (e.g., Ollama) via models.providers in the config, the baseUrl property is accepted at the provider level but not passed through to individual model objects during model resolution. This causes API requests to fail or be sent to the wrong endpoint.

Environment

  • Moltbot Version: 2026.1.24-3
  • Platform: macOS (Darwin 25.2.0)
  • Node Version: 24.12.0

Steps to Reproduce

  1. Configure a custom provider in ~/.clawdbot/clawdbot.json:
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-completions",
        "models": [
          {
            "id": "my-model:latest",
            "name": "My Local Model",
            "api": "openai-completions",
            "reasoning": false,
            "input": ["text"],
            "contextWindow": 131072,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/my-model:latest"
      }
    }
  }
}
  2. Start the gateway and send a message
  3. Observe that the request fails or is sent to the wrong endpoint

Expected Behavior

The baseUrl from the provider config should be included in the model object passed to pi-ai, allowing requests to be routed to the custom endpoint.

Actual Behavior

The model object passed to pi-ai has baseUrl: undefined, causing:

  • Requests sent to the default OpenAI endpoint
  • 401 errors such as "Incorrect API key provided: ollama-local"
  • Silent failures with no response

Root Cause

In dist/agents/pi-embedded-runner/model.js, the buildInlineProviderModels() function spreads model properties but does not include the provider's baseUrl or apiKey:

// Line 11 - Current (buggy) code:
return (entry?.models ?? []).map((model) => ({ ...model, provider: trimmed }));

The same issue exists in the fallback model creation path (lines 50-60), where neither providerCfg.baseUrl nor providerCfg.apiKey is included.
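The effect can be reproduced in isolation. In the sketch below, entry is a hypothetical stand-in for the parsed provider config (mirroring the shapes in the reproduction config above):

```javascript
// `entry` stands in for the parsed provider config from clawdbot.json.
const entry = {
  baseUrl: "http://127.0.0.1:11434/v1",
  apiKey: "ollama-local",
  models: [{ id: "my-model:latest", api: "openai-completions" }],
};
const trimmed = "ollama";

// Current (buggy) mapping: provider-level fields never reach the model.
const models = (entry?.models ?? []).map((model) => ({ ...model, provider: trimmed }));

console.log(models[0].provider); // "ollama"
console.log(models[0].baseUrl);  // undefined — downstream code falls back to the default endpoint
```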

Proposed Fix

Update buildInlineProviderModels() to include baseUrl and apiKey from the provider entry:

// Line 11 - Fixed:
return (entry?.models ?? []).map((model) => ({
  ...model,
  provider: trimmed,
  baseUrl: entry?.baseUrl,
  apiKey: entry?.apiKey
}));
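Applied to a hypothetical provider entry mirroring the config above, the fixed mapping carries the provider-level fields onto each model object (a self-contained sketch, not clawdbot's actual resolution code):

```javascript
// `entry` is a hypothetical stand-in for the parsed provider config.
const entry = {
  baseUrl: "http://127.0.0.1:11434/v1",
  apiKey: "ollama-local",
  models: [{ id: "my-model:latest", api: "openai-completions" }],
};
const trimmed = "ollama";

// Fixed mapping: provider-level baseUrl/apiKey are copied onto each model.
const models = (entry?.models ?? []).map((model) => ({
  ...model,
  provider: trimmed,
  baseUrl: entry?.baseUrl,
  apiKey: entry?.apiKey,
}));

console.log(models[0].baseUrl); // "http://127.0.0.1:11434/v1"
```

One design note: because baseUrl and apiKey come after ...model in the spread, a per-model baseUrl override would be clobbered by the provider value; writing `baseUrl: model.baseUrl ?? entry?.baseUrl` would preserve per-model overrides if those are meant to be supported.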

And update the fallback model creation:

// Lines 50-60 - Add baseUrl and apiKey:
const fallbackModel = normalizeModelCompat({
  id: modelId,
  name: modelId,
  api: providerCfg?.api ?? "openai-responses",
  provider,
  baseUrl: providerCfg?.baseUrl,  // ADD THIS
  apiKey: providerCfg?.apiKey,    // ADD THIS
  reasoning: false,
  input: ["text"],
  cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
  contextWindow: providerCfg?.models?.[0]?.contextWindow ?? DEFAULT_CONTEXT_TOKENS,
  maxTokens: providerCfg?.models?.[0]?.maxTokens ?? DEFAULT_CONTEXT_TOKENS,
});
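Why the missing field matters downstream: an OpenAI-compatible client builds its request URL from the model's baseUrl and falls back to the official OpenAI endpoint when it is undefined. The helper below is illustrative only (completionsUrl is not pi-ai's actual code; its real logic lives in openai-completions.js), but it makes the failure mode concrete:

```javascript
// Illustrative sketch of how an OpenAI-compatible client typically forms
// its request URL: join the model's baseUrl with the completions path.
function completionsUrl(model) {
  const base = model.baseUrl ?? "https://api.openai.com/v1";
  return `${base.replace(/\/+$/, "")}/chat/completions`;
}

// With the fix: the request goes to the local Ollama endpoint.
console.log(completionsUrl({ baseUrl: "http://127.0.0.1:11434/v1" }));
// → http://127.0.0.1:11434/v1/chat/completions

// Without the fix (baseUrl undefined): the request goes to api.openai.com,
// where the key "ollama-local" produces the observed 401.
console.log(completionsUrl({}));
// → https://api.openai.com/v1/chat/completions
```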

Workaround

Users can manually patch node_modules/clawdbot/dist/agents/pi-embedded-runner/model.js with the fix above. Because the patch lives in node_modules, it will be overwritten by any update or reinstall; a tool such as patch-package can persist it until a proper fix ships.

Additional Context

This bug affects any custom OpenAI-compatible provider configured via models.providers, not just Ollama. The pi-ai library expects model.baseUrl to be set (see openai-completions.js:304), but moltbot's model resolution never supplies it.

After applying the patch, local Ollama integration works correctly with proper request routing and response handling.
