
fix: add pnpm patch for pi-ai to support LiteLLM providerType #23

Closed

tiagoefreitas wants to merge 1 commit into openclaw:main from tiagoefreitas:fix/litellm-providertype

Conversation

@tiagoefreitas

Summary

  • Adds a pnpm patch for @mariozechner/pi-ai to support a providerType field for custom models
  • When using Claude models via a LiteLLM proxy, the bundled pi-ai sends OpenAI-specific parameters (store, max_completion_tokens) that cause errors with non-OpenAI backends such as AWS Bedrock
  • The fix checks model.providerType to determine whether the model is OpenAI-native before sending OpenAI-specific params

Problem

When configuring custom models in ~/.pi/agent/models.json that route through LiteLLM to Claude (e.g. AWS Bedrock), the bundled pi-ai would send:

  • store: false (OpenAI-specific)
  • max_completion_tokens instead of max_tokens

This caused errors like:

max_tokens_to_sample: Extra inputs are not permitted

Solution

The patch adds providerType support to openai-completions.js:

  • If model.providerType === "openai" or the baseUrl indicates OpenAI, send OpenAI-specific params
  • Otherwise (for proxied models), use generic params (max_tokens)
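The decision described above can be sketched roughly as follows (function names here are illustrative, not pi-ai's actual identifiers; the real patch lives inside openai-completions.js):

```javascript
// Illustrative sketch of the providerType check, not pi-ai's exact code.
function isOpenAINative(model) {
  // An explicit providerType wins; otherwise fall back to baseUrl detection
  // so existing OpenAI configs keep working unchanged.
  if (model.providerType) return model.providerType === "openai";
  return (model.baseUrl || "").includes("api.openai.com");
}

function buildCompletionParams(model, maxTokens) {
  const params = { model: model.id };
  if (isOpenAINative(model)) {
    // OpenAI-native endpoints accept these fields.
    params.store = false;
    params.max_completion_tokens = maxTokens;
  } else {
    // Proxied backends (LiteLLM -> Bedrock, etc.) reject OpenAI-only fields,
    // so send the generic max_tokens instead.
    params.max_tokens = maxTokens;
  }
  return params;
}
```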

Users can now configure custom models with providerType: "anthropic" (or "google", etc.) to properly route requests.
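A custom model entry in ~/.pi/agent/models.json might then look like this (the exact schema depends on your pi-ai version; fields other than providerType and baseUrl are illustrative):

```json
{
  "models": [
    {
      "id": "claude-opus-4-5",
      "providerType": "anthropic",
      "baseUrl": "http://localhost:4000/v1",
      "apiKey": "sk-litellm-placeholder"
    }
  ]
}
```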

Test plan

  • Tested with Claude Opus 4.5 via LiteLLM proxy to AWS Bedrock
  • Verified Pi responds correctly with the patched version
  • Verified existing OpenAI models still work (fallback to baseUrl detection)

🤖 Generated with Claude Code

When using Claude models via LiteLLM proxy, the bundled pi-ai sends
OpenAI-specific parameters (store, max_completion_tokens) that cause
errors with non-OpenAI backends like AWS Bedrock.

This patch adds providerType support to openai-completions.js:
- Check model.providerType to determine if model is OpenAI-native
- Only send OpenAI-specific params to actual OpenAI models
- Use max_tokens instead of max_completion_tokens for proxy models

Users can now configure custom models in ~/.pi/agent/models.json with
providerType: "anthropic" (or "google", etc.) to properly route
requests through LiteLLM to Claude and other non-OpenAI models.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@steipete
Contributor

Interesting addition!

You can maintain your own fork of pi-ai like I do in the config:

```json
"command": [
  "node",
  "/Users/steipete/Projects/pi-mono/packages/coding-agent/dist/cli.js",
  "--mode",
  "rpc",
  "--provider",
  "anthropic",
  "--model",
  "claude-opus-4-5"
],
```

Also try upstreaming your changes to pi-ai! That's the best long-term solution.

@steipete steipete closed this Dec 10, 2025
@slayoffer

@claude please review this PR
