
feat(tools): add Brave llm-context mode for web_search #39906

Merged
steipete merged 2 commits into main from feat/brave-llm-context-api on Mar 8, 2026

Conversation

@steipete
Contributor

@steipete steipete commented Mar 8, 2026

Summary

  • add tools.web.search.brave.mode (web | llm-context) and wire it through config types/schema/help/labels
  • add Brave LLM Context mode to web_search using /res/v1/llm/context
  • reject unsupported ui_lang, freshness, and date_after/date_before filters in llm-context mode
  • add regression/config/docs coverage for the new mode

Linked Issues

Validation

  • pnpm check
  • pnpm build
  • pnpm test

@cursor

cursor bot commented Mar 8, 2026

PR Summary

Medium Risk
Adds a new Brave search execution path hitting a different API endpoint and returning different payload shape, plus new validation rules for supported filters. Risk is mainly around behavior changes for Brave users and caching/parameter handling, not security-critical logic.

Overview
Adds an opt-in tools.web.search.brave.mode setting (default web) to let web_search call Brave’s /res/v1/llm/context endpoint and return extracted grounding snippets plus sources metadata.

When mode: "llm-context" is enabled, web_search now uses a dedicated Brave request/response mapping and cache key, updates the tool description/output to include mode, and rejects unsupported parameters (ui_lang, freshness, date_after, date_before). Docs, config schema/types/help/labels, tests, and the changelog are updated to cover the new mode.
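For illustration, enabling the new mode in a JSON config might look like the fragment below (the nesting follows the dotted key `tools.web.search.brave.mode` from the summary; the surrounding file structure is an assumption):

```json
{
  "tools": {
    "web": {
      "search": {
        "brave": {
          "mode": "llm-context"
        }
      }
    }
  }
}
```

Omitting the key keeps the default `"web"` mode and the existing behavior.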

Written by Cursor Bugbot for commit 2600331. This will update automatically on new commits.

@openclaw-barnacle openclaw-barnacle bot added docs Improvements or additions to documentation agents Agent runtime and tooling size: M maintainer Maintainer-authored PR labels Mar 8, 2026
thirumaleshp and others added 2 commits March 8, 2026 13:56
Add support for Brave's LLM Context API endpoint (/res/v1/llm/context)
as an optional mode for the web_search tool. When configured with
tools.web.search.brave.mode set to llm-context, the tool returns
pre-extracted page content optimized for LLM grounding instead of
standard URL/snippet results.

The llm-context cache key excludes count and ui_lang parameters that
the LLM Context API does not accept, preventing unnecessary cache
misses.

Closes #14992

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@steipete steipete force-pushed the feat/brave-llm-context-api branch from 8e643c7 to 2600331 Compare March 8, 2026 13:56
@steipete steipete merged commit acac7e3 into main Mar 8, 2026
13 of 14 checks passed
@steipete steipete deleted the feat/brave-llm-context-api branch March 8, 2026 13:57
@steipete
Contributor Author

steipete commented Mar 8, 2026

Landed via temp rebase onto main.

  • Gate: pnpm check, pnpm build, pnpm test
  • Land commit: 2600331
  • Merge commit: acac7e3

Thanks @thirumaleshp!

@greptile-apps
Contributor

greptile-apps bot commented Mar 8, 2026

Greptile Summary

This PR adds an opt-in llm-context mode for Brave Search, routing web_search calls to Brave's /res/v1/llm/context endpoint and returning pre-extracted page chunks instead of standard web snippets. The feature is well-integrated: config types, Zod schema, help text, labels, docs, and a thorough test suite covering endpoint routing, parameter passing, and all unsupported-parameter rejections are all present.

Key changes:

  • New BRAVE_LLM_CONTEXT_ENDPOINT constant and runBraveLlmContextSearch internal function that maps Brave's grounding.generic results to the shared result shape.
  • resolveBraveConfig / resolveBraveMode helpers; braveMode is wired into createWebSearchTool and passed down to runWebSearch.
  • Tool-level guards reject ui_lang, freshness, and date_after/date_before when braveMode === "llm-context".
  • Cache key for llm-context mode is correctly scoped (omits count, maxTokens, and other inapplicable parameters).
  • Minor code-quality note: runBraveLlmContextSearch still accepts a freshness parameter and forwards it to the API despite this being documented as unsupported. The tool-handler guard prevents this code path from executing in practice, but removing the parameter from the function signature would make the contract explicit and prevent potential future confusion.

Confidence Score: 4/5

  • This PR is safe to merge; the new llm-context mode is correctly gated with tool-level parameter validation and all unsupported parameters are rejected before reaching the API.
  • The implementation is well-structured with solid test coverage for the happy path, all three unsupported-parameter rejections, and config validation. The only concern is a minor code-quality inconsistency where runBraveLlmContextSearch accepts and forwards a freshness parameter despite this being unsupported. The tool-handler guard prevents this code path from executing in practice, so there is no user-facing risk, but fixing the function signature would improve code clarity and prevent potential future confusion.
  • src/agents/tools/web-search.ts — the freshness parameter in runBraveLlmContextSearch should be removed to match the documented API contract.

Last reviewed commit: 2600331

Comment on lines +1243 to +1275
```typescript
async function runBraveLlmContextSearch(params: {
  query: string;
  apiKey: string;
  timeoutSeconds: number;
  country?: string;
  search_lang?: string;
  freshness?: string;
}): Promise<{
  results: Array<{
    url: string;
    title: string;
    snippets: string[];
    siteName?: string;
  }>;
  sources?: BraveLlmContextResponse["sources"];
}> {
  const url = new URL(BRAVE_LLM_CONTEXT_ENDPOINT);
  url.searchParams.set("q", params.query);
  if (params.country) {
    url.searchParams.set("country", params.country);
  }
  if (params.search_lang) {
    url.searchParams.set("search_lang", params.search_lang);
  }
  if (params.freshness) {
    url.searchParams.set("freshness", params.freshness);
  }

  return withTrustedWebSearchEndpoint(
    {
      url: url.toString(),
      timeoutSeconds: params.timeoutSeconds,
      init: {
```
Contributor


runBraveLlmContextSearch declares freshness?: string in its parameter signature (line 1249) and forwards it to the Brave LLM Context endpoint (lines 1267–1269), even though the /res/v1/llm/context API doesn't support this parameter. The tool-level guard in createWebSearchTool (line 1701) prevents freshness from ever being non-undefined by the time runBraveLlmContextSearch is called in llm-context mode, so there's no user-facing bug today. However, the function signature misleadingly suggests the parameter is supported, which could cause confusion in future refactoring.

To make the API contract explicit and prevent accidental misuse, consider removing the freshness parameter from the function signature:

Suggested change

```diff
-async function runBraveLlmContextSearch(params: {
-  query: string;
-  apiKey: string;
-  timeoutSeconds: number;
-  country?: string;
-  search_lang?: string;
-  freshness?: string;
-}): Promise<{
-  results: Array<{
-    url: string;
-    title: string;
-    snippets: string[];
-    siteName?: string;
-  }>;
-  sources?: BraveLlmContextResponse["sources"];
-}> {
-  const url = new URL(BRAVE_LLM_CONTEXT_ENDPOINT);
-  url.searchParams.set("q", params.query);
-  if (params.country) {
-    url.searchParams.set("country", params.country);
-  }
-  if (params.search_lang) {
-    url.searchParams.set("search_lang", params.search_lang);
-  }
-  if (params.freshness) {
-    url.searchParams.set("freshness", params.freshness);
-  }
-  return withTrustedWebSearchEndpoint(
-    {
-      url: url.toString(),
-      timeoutSeconds: params.timeoutSeconds,
-      init: {
+async function runBraveLlmContextSearch(params: {
+  query: string;
+  apiKey: string;
+  timeoutSeconds: number;
+  country?: string;
+  search_lang?: string;
+}): Promise<{
```

Then remove the corresponding if (params.freshness) block (lines 1267–1269) and drop the freshness: params.freshness argument at the call site (line 1474).



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8e643c7a1d


Comment on lines +1477 to +1480
```typescript
const mapped = llmResults.map((entry) => ({
  title: entry.title ? wrapWebContent(entry.title, "web_search") : "",
  url: entry.url,
  snippets: entry.snippets.map((s) => wrapWebContent(s, "web_search")),
```


P2: Respect count limit in Brave llm-context mode

The new llm-context branch ignores the caller’s count constraint even though count is part of the web_search contract. In this path all upstream results are returned (llmResults.map(...)) and count is reported as mapped.length, so a request like count: 1 can still return many extracted chunks, increasing response size and token/cost usage unexpectedly. Apply params.count in this mode (either pass it to Brave if supported, or locally truncate before building the payload).
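The local-truncation option suggested here could look like the sketch below (the entry shape and function name are assumptions based on the snippet above, not the project's actual code):

```typescript
type LlmContextEntry = { url: string; title: string; snippets: string[] };

// Truncate upstream llm-context results to the caller's requested count.
// An undefined or non-positive count is treated as "no limit" (assumption).
function limitLlmResults(entries: LlmContextEntry[], count?: number): LlmContextEntry[] {
  if (!count || count <= 0) return entries;
  return entries.slice(0, count);
}

const sample: LlmContextEntry[] = [
  { url: "https://a.example", title: "A", snippets: ["x"] },
  { url: "https://b.example", title: "B", snippets: ["y"] },
];

// With count: 1, only the first extracted chunk is returned.
const limited = limitLlmResults(sample, 1);
```

Truncating before building the payload keeps response size and token cost bounded even when the upstream endpoint ignores `count`.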



@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


```diff
-  `${params.provider}:${params.query}:${params.count}:${params.country || "default"}:${params.search_lang || params.language || "default"}:${params.ui_lang || "default"}:${params.freshness || "default"}:${params.dateAfter || "default"}:${params.dateBefore || "default"}:${params.searchDomainFilter?.join(",") || "default"}:${params.maxTokens || "default"}:${params.maxTokensPerPage || "default"}:${providerSpecificKey}`,
+  params.provider === "brave" && effectiveBraveMode === "llm-context"
+    ? `${params.provider}:llm-context:${params.query}:${params.country || "default"}:${params.search_lang || params.language || "default"}:${params.freshness || "default"}`
+    : `${params.provider}:${effectiveBraveMode}:${params.query}:${params.count}:${params.country || "default"}:${params.search_lang || params.language || "default"}:${params.ui_lang || "default"}:${params.freshness || "default"}:${params.dateAfter || "default"}:${params.dateBefore || "default"}:${params.searchDomainFilter?.join(",") || "default"}:${params.maxTokens || "default"}:${params.maxTokensPerPage || "default"}:${providerSpecificKey}`,
```
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Cache key leaks brave mode into non-brave providers

Low Severity

The else branch of the cache key ternary embeds ${effectiveBraveMode} (always "web") into cache keys for every provider — perplexity, grok, gemini, kimi, and brave-web — even though the brave mode concept is meaningless for non-brave providers. This silently changes the key format from e.g. perplexity:<query>:… to perplexity:web:<query>:… for all providers.
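One way to address this (a sketch under the assumption that the key is built from a provider-qualified prefix; the helper name is hypothetical) is to add the mode segment only when the provider is actually brave:

```typescript
// Build the provider-qualified cache key prefix. Only brave carries a mode
// segment, so keys for all other providers keep their original format.
function cacheKeyPrefix(provider: string, braveMode: "web" | "llm-context"): string {
  return provider === "brave" ? `${provider}:${braveMode}` : provider;
}

const perplexityKey = cacheKeyPrefix("perplexity", "web");
const braveKey = cacheKeyPrefix("brave", "llm-context");
```

This preserves existing cache entries for non-brave providers while still distinguishing the two brave modes.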


```typescript
  }
  if (params.freshness) {
    url.searchParams.set("freshness", params.freshness);
  }
```


Dead freshness parameter in LLM context function

Low Severity

runBraveLlmContextSearch accepts a freshness parameter and would pass it to the Brave LLM Context API URL, but the validation layer in execute always rejects freshness before this function is reached. This makes the parameter, the if (params.freshness) guard, and the freshness: params.freshness call-site argument all dead code. The function's interface falsely implies freshness is supported, which could mislead future callers.




Development

Successfully merging this pull request may close these issues.

Feature: Support Brave Search LLM Context API as web_search provider option

2 participants