
feat: add Requesty as LLM provider #1245

Merged
VaibhavUpreti merged 10 commits into Tracer-Cloud:main from Thibault00:feat/add-requesty-llm-provider
May 3, 2026

Conversation

@Thibault00 (Contributor) commented May 3, 2026

Describe the changes you have made in this PR -

Hey! We're Requesty, an OpenAI-compatible LLM gateway (300+ models, fallback routing, caching, cost optimization). We've been using OpenSRE for our SRE workflows and wanted to add native support for routing LLM calls through Requesty.

The integration follows the exact same pattern as OpenRouter since we're also OpenAI-compatible. Every request sent through Requesty includes an X-Title: OpenSRE header so traffic is identifiable on both sides. Would love to explore a deeper connection between the two projects if there's interest!

What changed:

  • app/config.py — added Requesty model constants (anthropic/claude-sonnet-4-6 for both reasoning and toolcall), base URL, LLMModelConfig, LLMProvider literal, LLMSettings fields, validator, and from_env() env var resolution (sketched after this list)
  • app/services/llm_client.py — added a default_headers param to OpenAILLMClient (forwarded to the OpenAI constructor), and a new elif provider == "requesty" branch that passes X-Title: OpenSRE
  • app/cli/wizard/config.py — added REQUESTY_MODELS (5 curated models: Claude Opus 4.7, Claude Sonnet 4.6, GPT-5.5, Gemini 3.1 Pro, Gemini 3.1 Flash-Lite) and a ProviderOption entry in SUPPORTED_PROVIDERS
  • app/cli/wizard/validation.py — added base URL resolution for credential validation
  • app/cli/commands/doctor.py — opensre health now recognizes REQUESTY_API_KEY
  • app/cli/interactive_shell/agent_actions.py — runtime provider switching support
  • .env.example — documented REQUESTY_API_KEY, REQUESTY_REASONING_MODEL, REQUESTY_TOOLCALL_MODEL
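
A minimal sketch of how the new config pieces hang together, assuming the constant and field names listed above (illustrative, not the verbatim diff):

# app/config.py (sketch; names taken from the list above)
REQUESTY_BASE_URL = "https://router.requesty.ai/v1"

REQUESTY_LLM_CONFIG = LLMModelConfig(
    reasoning_model="anthropic/claude-sonnet-4-6",
    toolcall_model="anthropic/claude-sonnet-4-6",
)

# LLMProvider grows a "requesty" literal; LLMSettings grows requesty_api_key,
# requesty_reasoning_model, and requesty_toolcall_model fields, resolved in
# from_env() from the env vars documented in .env.example.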

Demo/Screenshot for feature changes and bug fixes -

$ REQUESTY_API_KEY=test-key LLM_PROVIDER=requesty python3 -c "
from app.config import REQUESTY_BASE_URL, REQUESTY_LLM_CONFIG, LLMSettings
s = LLMSettings.from_env()
print(f'Provider: {s.provider}')
print(f'Reasoning: {s.requesty_reasoning_model}')
print(f'Toolcall: {s.requesty_toolcall_model}')
print(f'Base URL: {REQUESTY_BASE_URL}')
"
Provider: requesty
Reasoning: anthropic/claude-sonnet-4-6
Toolcall: anthropic/claude-sonnet-4-6
Base URL: https://router.requesty.ai/v1

Local checks on changed files all pass:

ruff check (6 changed files)  -> All checks passed!
ruff format --check            -> 6 files already formatted
mypy                           -> 0 errors in changed files
pytest                         -> 3815 passed

Code Understanding and AI Usage

Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?

  • No, I wrote all the code myself
  • Yes, I used AI assistance (continue below)

If you used AI assistance:

  • I have reviewed every single line of the AI-generated code
  • I can explain the purpose and logic of each function/component I added
  • I have tested edge cases and understand how the code handles them
  • I have modified the AI output to follow this project's coding standards and conventions

Explain your implementation approach:

Requesty is OpenAI-compatible, so it slots right into the existing OpenAILLMClient path, same as OpenRouter, Gemini, and NVIDIA. The only new concept is the default_headers parameter on OpenAILLMClient, which lets us pass X-Title: OpenSRE without touching other providers. The OpenAI Python SDK already supports default_headers natively, so this is just threading it through.
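
A rough sketch of that threading; the OpenAI SDK's default_headers argument is real, while the class and factory shapes here are assumed from the descriptions above:

# app/services/llm_client.py (sketch)
from openai import OpenAI

class OpenAILLMClient:
    def __init__(self, base_url: str, api_key: str, default_headers: dict | None = None):
        # default_headers is forwarded to the SDK, which attaches the
        # headers to every request made through this client
        self._client = OpenAI(
            base_url=base_url,
            api_key=api_key,
            default_headers=default_headers,
        )

def _create_llm_client(settings):
    # only the requesty branch is shown; other providers are unchanged
    if settings.provider == "requesty":
        return OpenAILLMClient(
            base_url=REQUESTY_BASE_URL,
            api_key=settings.requesty_api_key,
            default_headers={"X-Title": "OpenSRE"},
        )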

Followed the exact same pattern as the OpenRouter integration across all 7 files: constants, config model, client factory, wizard, validation, doctor check, and interactive shell. No new abstractions, no new dependencies.


Checklist before requesting a review

  • I have added proper PR title and linked to the issue
  • I have performed a self-review of my code
  • I can explain the purpose of every function, class, and logic block I added
  • I understand why my changes work and have tested them thoroughly
  • I have considered potential edge cases and how my code handles them
  • If it is a core feature, I have added thorough tests
  • My code follows the project's style guidelines and conventions

Note: Please check Allow edits from maintainers if you would like us to assist in the PR.

Thibault00 added 9 commits May 3, 2026 16:08

Add Requesty as an LLM provider option in config:
- Model constants (anthropic/claude-sonnet-4-6 reasoning, anthropic/claude-haiku-4-5 toolcall)
- Base URL (https://router.requesty.ai/v1)
- LLMModelConfig, LLMProvider Literal, LLMSettings fields

Both default to anthropic/claude-sonnet-4-6.

Accept optional default_headers dict and forward it to the OpenAI constructor. This enables per-provider custom headers like X-Title.

Route LLM_PROVIDER=requesty through OpenAILLMClient with router.requesty.ai/v1 base URL and X-Title: OpenSRE header.

Add REQUESTY_MODELS model catalog, ProviderOption entry in SUPPORTED_PROVIDERS, and validation base URL resolution.

Register REQUESTY_API_KEY in health check and add requesty to the interactive shell provider name set.

Add requesty_api_key to provider_to_key and env_var maps in the model validator, and add env var resolution in from_env().
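
For illustration, a hypothetical shape of what that last commit describes (provider_to_key and env_var come from the commit message; the surrounding structure is assumed):

import os

# model validator (sketch): map each provider to its API-key field
provider_to_key = {
    "openrouter": "openrouter_api_key",
    "requesty": "requesty_api_key",  # new entry
}

# from_env() (sketch): resolve models from env vars with config defaults
requesty_reasoning_model = os.getenv("REQUESTY_REASONING_MODEL", "anthropic/claude-sonnet-4-6")
requesty_toolcall_model = os.getenv("REQUESTY_TOOLCALL_MODEL", "anthropic/claude-sonnet-4-6")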
@Thibault00 changed the title from "Feat/add requesty llm provider" to "feat: add Requesty as LLM provider" May 3, 2026
greptile-apps (Bot) commented May 3, 2026

Greptile Summary

This PR adds Requesty as a new OpenAI-compatible LLM gateway provider, following the same integration pattern used by OpenRouter, Gemini, and NVIDIA. Changes span config constants, LLMSettings fields, from_env resolution, the setup wizard, validation, doctor command, and OpenAILLMClient (which gains a default_headers parameter used to send X-Title: OpenSRE to Requesty).

Confidence Score: 4/5

Safe to merge with minor cleanup; no runtime-breaking issues found.

Only P2 findings: a duplicate label in the wizard model list and a misplaced dict entry in from_env. Core integration logic is consistent with existing provider patterns.

app/cli/wizard/config.py — duplicate "Claude Sonnet 4.6 (via Requesty)" label in REQUESTY_MODELS.
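
To illustrate the finding, hypothetical catalog entries of the kind described (the real list differs; the follow-up commit below disambiguates with a "Bedrock" qualifier):

REQUESTY_MODELS = [
    ("anthropic/claude-sonnet-4-6", "Claude Sonnet 4.6 (via Requesty)"),
    ("bedrock/claude-sonnet-4-6", "Claude Sonnet 4.6 (via Requesty)"),  # same label, different path
]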

Important Files Changed

  • app/cli/wizard/config.py — Adds REQUESTY_MODELS and ProviderOption; two entries in REQUESTY_MODELS share the same "Claude Sonnet 4.6 (via Requesty)" label with different underlying model paths.
  • app/config.py — Adds Requesty constants, LLMSettings fields, and from_env resolution; structurally mirrors other providers, but requesty_api_key is placed out of order with other API-key entries.
  • app/services/llm_client.py — Adds default_headers support to OpenAILLMClient and wires Requesty provider with X-Title header; consistent with existing provider patterns.
  • app/cli/wizard/validation.py — Adds Requesty base URL lookup to _get_provider_base_url, consistent with other provider branches.
  • app/cli/commands/doctor.py — Adds requesty → REQUESTY_API_KEY to the provider health-check map.
  • app/cli/interactive_shell/agent_actions.py — Adds "requesty" to the frozen set of known LLM provider names.
  • .env.example — Documents REQUESTY_API_KEY, REQUESTY_REASONING_MODEL, and REQUESTY_TOOLCALL_MODEL env vars.
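
A guess at the shape of the doctor health-check map described above (the map and function names here are assumptions, not the actual code):

import os

# app/cli/commands/doctor.py (sketch)
PROVIDER_KEY_ENV_VARS = {
    "openrouter": "OPENROUTER_API_KEY",
    "requesty": "REQUESTY_API_KEY",  # new entry from this PR
}

def provider_key_configured(provider: str) -> bool:
    # opensre health style check: configured when the key env var is set
    env_var = PROVIDER_KEY_ENV_VARS.get(provider)
    return bool(env_var and os.getenv(env_var))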

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[LLM_PROVIDER=requesty] --> B[LLMSettings.from_env]
    B --> C{provider == requesty?}
    C -->|Yes| D[_create_llm_client]
    D --> E[Load REQUESTY_LLM_CONFIG & REQUESTY_BASE_URL]
    E --> F[OpenAILLMClient\nbase_url=router.requesty.ai/v1\ndefault_headers: X-Title OpenSRE]
    F --> G[OpenAI SDK to Requesty Gateway]
    G --> H{Route by model prefix}
    H --> I[anthropic/ to Claude]
    H --> J[bedrock/ to AWS Bedrock]
    H --> K[openai/ to GPT]
    H --> L[vertex/ to Gemini]


Follow-up commit addressing the two review threads (app/cli/wizard/config.py and app/config.py, both now outdated):
- Disambiguate duplicate "Claude Sonnet 4.6" labels in REQUESTY_MODELS
  by adding "Bedrock" qualifier to bedrock/ model entries
- Move requesty_api_key to the API key group in from_env() for
  consistency with other providers
@gitsofaryan (Contributor) commented

@Thibault00 CI is failing; please address the Greptile issues!

@VaibhavUpreti (Member) left a comment

Awesome @Thibault00 - really appreciate the PR! Great to hear you’ve been using OpenSRE in your SRE workflows.

We’re adding a lot more product improvements and features soon, so excited to have Requesty support in the project.

I'll take this branch from here.

VaibhavUpreti merged commit ad057f9 into Tracer-Cloud:main May 3, 2026
9 of 10 checks passed
github-actions (Bot) commented May 3, 2026

🚀 Houston, we have a merge. @Thibault00 your PR is in orbit. Thanks for launching this one!


👋 Join us on Discord - OpenSRE: hang out, contribute, or hunt for features and issues. Everyone's welcome.

