feat: add Requesty as LLM provider #1245
Conversation
Add Requesty as an LLM provider option in config:
- Model constants (anthropic/claude-sonnet-4-6 reasoning, anthropic/claude-haiku-4-5 toolcall)
- Base URL (https://router.requesty.ai/v1)
- LLMModelConfig, LLMProvider Literal, LLMSettings fields
Both default to anthropic/claude-sonnet-4-6.
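For illustration, a minimal sketch of what these constants could look like in app/config.py. REQUESTY_BASE_URL and REQUESTY_LLM_CONFIG appear in the flowchart below; the model constant names and the LLMModelConfig field names are assumptions inferred from the REQUESTY_REASONING_MODEL/REQUESTY_TOOLCALL_MODEL env vars, not the actual code.

```python
# Hypothetical sketch of the new constants in app/config.py; exact names
# of the model constants and LLMModelConfig fields are assumptions.
REQUESTY_BASE_URL = "https://router.requesty.ai/v1"
REQUESTY_CLAUDE_SONNET = "anthropic/claude-sonnet-4-6"  # reasoning
REQUESTY_CLAUDE_HAIKU = "anthropic/claude-haiku-4-5"    # toolcall

# Per the PR description, both roles currently default to Sonnet.
# (LLMModelConfig is defined earlier in app/config.py.)
REQUESTY_LLM_CONFIG = LLMModelConfig(
    reasoning_model=REQUESTY_CLAUDE_SONNET,
    toolcall_model=REQUESTY_CLAUDE_SONNET,
)
```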
Accept optional default_headers dict and forward it to the OpenAI constructor. This enables per-provider custom headers like X-Title.
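A sketch of that pass-through, assuming the surrounding constructor shape; the OpenAI SDK's default_headers keyword is real and is forwarded unchanged.

```python
from openai import OpenAI

class OpenAILLMClient:
    def __init__(
        self,
        api_key: str,
        base_url: str,
        default_headers: dict[str, str] | None = None,
    ):
        # default_headers is attached by the SDK to every request,
        # e.g. {"X-Title": "OpenSRE"} for Requesty traffic tagging.
        self._client = OpenAI(
            api_key=api_key,
            base_url=base_url,
            default_headers=default_headers,
        )
```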
Route LLM_PROVIDER=requesty through OpenAILLMClient with router.requesty.ai/v1 base URL and X-Title: OpenSRE header.
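A hedged sketch of that branch; _create_llm_client and the requesty_api_key field come from the flowchart and commit messages, while the rest of the function is assumed.

```python
# Hypothetical shape of the provider branch in the client factory.
def _create_llm_client(settings: "LLMSettings") -> OpenAILLMClient:
    if settings.provider == "requesty":
        return OpenAILLMClient(
            api_key=settings.requesty_api_key,
            base_url=REQUESTY_BASE_URL,  # https://router.requesty.ai/v1
            default_headers={"X-Title": "OpenSRE"},  # identify traffic on both sides
        )
    # ...existing branches for openrouter, gemini, nvidia, etc.
    raise ValueError(f"unsupported provider: {settings.provider}")
```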
Add the REQUESTY_MODELS model catalog, a ProviderOption entry in SUPPORTED_PROVIDERS, and base URL resolution for credential validation.
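A rough sketch of the wizard entries in app/cli/wizard/config.py. The tuple shape of REQUESTY_MODELS, ProviderOption's fields, and the bedrock/ model ID are all assumptions; the labels shown here already carry the "Bedrock" qualifier that Greptile suggests later in this thread.

```python
REQUESTY_MODELS = [
    ("anthropic/claude-sonnet-4-6", "Claude Sonnet 4.6 (via Requesty)"),
    ("bedrock/anthropic/claude-sonnet-4-6",
     "Claude Sonnet 4.6 (Bedrock, via Requesty)"),  # qualifier avoids duplicate labels
    # ...remaining curated models...
]

SUPPORTED_PROVIDERS["requesty"] = ProviderOption(
    name="Requesty",
    models=REQUESTY_MODELS,
    env_var="REQUESTY_API_KEY",
)
```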
Register REQUESTY_API_KEY in health check and add requesty to the interactive shell provider name set.
Add requesty_api_key to provider_to_key and env_var maps in the model validator, and add env var resolution in from_env().
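A sketch of that wiring; the map names come from the commit message, while the surrounding validator structure is assumed.

```python
import os

provider_to_key = {
    # ...existing providers...
    "requesty": "requesty_api_key",
}
env_var = {
    # ...existing fields...
    "requesty_api_key": "REQUESTY_API_KEY",
}

# And inside LLMSettings.from_env(), alongside the other API keys:
requesty_api_key = os.getenv("REQUESTY_API_KEY")
```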
Greptile Summary

This PR adds Requesty as a new OpenAI-compatible LLM gateway provider, following the same integration pattern used by OpenRouter, Gemini, and NVIDIA. Changes span config constants, the config model, the client factory, the wizard, validation, the doctor check, and the interactive shell.

Confidence Score: 4/5

Safe to merge with minor cleanup; no runtime-breaking issues found. Only P2 findings: a duplicate label in the wizard model list and a misplaced dict entry in from_env. Core integration logic is consistent with existing provider patterns.

- app/cli/wizard/config.py — duplicate "Claude Sonnet 4.6 (via Requesty)" label in REQUESTY_MODELS.
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[LLM_PROVIDER=requesty] --> B[LLMSettings.from_env]
    B --> C{provider == requesty?}
    C -->|Yes| D[_create_llm_client]
    D --> E[Load REQUESTY_LLM_CONFIG & REQUESTY_BASE_URL]
    E --> F[OpenAILLMClient\nbase_url=router.requesty.ai/v1\ndefault_headers: X-Title OpenSRE]
    F --> G[OpenAI SDK to Requesty Gateway]
    G --> H{Route by model prefix}
    H --> I[anthropic/ to Claude]
    H --> J[bedrock/ to AWS Bedrock]
    H --> K[openai/ to GPT]
    H --> L[vertex/ to Gemini]
```
Reviews (1): Last reviewed commit: "fix: add Requesty to LLMSettings validat..."
- Disambiguate duplicate "Claude Sonnet 4.6" labels in REQUESTY_MODELS by adding a "Bedrock" qualifier to bedrock/ model entries
- Move requesty_api_key to the API key group in from_env() for consistency with other providers
@Thibault00 CI is failing; please address the Greptile issues!
VaibhavUpreti
left a comment
Awesome @Thibault00 - really appreciate the PR! Great to hear you’ve been using OpenSRE in your SRE workflows.
We’re adding a lot more product improvements and features soon, so excited to have Requesty support in the project.
I'll take this branch from here.
🚀 Houston, we have a merge. @Thibault00 your PR is in orbit. Thanks for launching this one! 👋 Join us on Discord - OpenSRE: hang out, contribute, or hunt for features and issues. Everyone's welcome.

Describe the changes you have made in this PR -
Hey! We're Requesty, an OpenAI-compatible LLM gateway (300+ models, fallback routing, caching, cost optimization). We've been using OpenSRE for our SRE workflows and wanted to add native support for routing LLM calls through Requesty.
The integration follows the exact same pattern as OpenRouter, since we're also OpenAI-compatible. Every request sent through Requesty includes an X-Title: OpenSRE header so traffic is identifiable on both sides. Would love to explore a deeper connection between the two projects if there's interest!

What changed:
- app/config.py — added Requesty model constants (anthropic/claude-sonnet-4-6 for both reasoning and toolcall), base URL, LLMModelConfig, LLMProvider literal, LLMSettings fields, validator, and from_env() env var resolution
- app/services/llm_client.py — added a default_headers param to OpenAILLMClient (forwarded to the OpenAI constructor), and a new elif provider == "requesty" branch that passes X-Title: OpenSRE
- app/cli/wizard/config.py — added REQUESTY_MODELS (5 curated models: Claude Opus 4.7, Claude Sonnet 4.6, GPT-5.5, Gemini 3.1 Pro, Gemini 3.1 Flash-Lite) and a ProviderOption entry in SUPPORTED_PROVIDERS
- app/cli/wizard/validation.py — added base URL resolution for credential validation
- app/cli/commands/doctor.py — opensre health now recognizes REQUESTY_API_KEY
- app/cli/interactive_shell/agent_actions.py — runtime provider switching support
- .env.example — documented REQUESTY_API_KEY, REQUESTY_REASONING_MODEL, REQUESTY_TOOLCALL_MODEL

Demo/Screenshot for feature changes and bug fixes -
Local checks on changed files all pass:
Code Understanding and AI Usage
Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?
If you used AI assistance:
Explain your implementation approach:
Requesty is OpenAI-compatible, so it slots right into the existing OpenAILLMClient path, same as OpenRouter, Gemini, and NVIDIA. The only new concept is the default_headers parameter on OpenAILLMClient, which lets us pass X-Title: OpenSRE without touching other providers. The OpenAI Python SDK already supports default_headers natively, so this is just threading it through.

Followed the exact same pattern as the OpenRouter integration across all 7 files: constants, config model, client factory, wizard, validation, doctor check, and interactive shell. No new abstractions, no new dependencies.
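To make the end-to-end flow concrete, an illustrative usage sketch. The env var names match .env.example; the import path for LLMSettings and the location of _create_llm_client are assumptions based on the flowchart, not verified against the repo.

```python
import os

# Assumed import path; LLMSettings lives in app/config.py per the PR.
from app.config import LLMSettings

os.environ["LLM_PROVIDER"] = "requesty"
os.environ["REQUESTY_API_KEY"] = "sk-..."  # placeholder key
os.environ["REQUESTY_REASONING_MODEL"] = "anthropic/claude-sonnet-4-6"
os.environ["REQUESTY_TOOLCALL_MODEL"] = "anthropic/claude-sonnet-4-6"

settings = LLMSettings.from_env()  # resolves requesty_api_key and models
# client = _create_llm_client(settings)  # OpenAILLMClient with X-Title: OpenSRE
```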
Checklist before requesting a review
Note: Please check Allow edits from maintainers if you would like us to assist in the PR.