feat(tools): add Brave LLM Context API mode for web_search#16312

Closed
RoccoFortuna wants to merge 3 commits into openclaw:main from RoccoFortuna:feat/brave-llm-context-api

Conversation


@RoccoFortuna RoccoFortuna commented Feb 14, 2026

Summary

  • Problem: Brave offers an LLM Context API (/res/v1/llm/context) that returns pre-extracted, relevance-scored web content optimized for LLM consumption, but OpenClaw only supports the standard web search endpoint.
  • Why it matters: The LLM Context API returns full text snippets, tables, and code blocks instead of just titles/descriptions, which is significantly better for agent grounding.
  • What changed: Added brave.mode and brave.llmContext config, a new runBraveLlmContextSearch() function, and llm-context branches in runWebSearch()/createWebSearchTool().
  • What did NOT change: Standard Brave web search, Perplexity, and Grok providers are untouched. No brave config block = existing behavior, zero breaking changes.
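The mode dispatch described above can be sketched roughly as follows. This is an illustrative sketch only: `resolveBraveEndpoint` and `BraveConfig` are hypothetical names, not the PR's actual identifiers; the only facts taken from the PR are the two endpoint paths and that an absent config block defaults to standard web search.

```typescript
// Hypothetical sketch of the brave.mode branch; names are illustrative.
type BraveMode = "web" | "llm-context";

interface BraveConfig {
  mode?: BraveMode;
}

// Pick the Brave endpoint path for a (possibly absent) config block.
// No brave config block = existing behavior (standard web search).
function resolveBraveEndpoint(config?: BraveConfig): string {
  const mode: BraveMode = config?.mode ?? "web";
  return mode === "llm-context"
    ? "/res/v1/llm/context"
    : "/res/v1/web/search";
}
```

The key design point claimed by the PR is that the default path is byte-for-byte the old behavior, so existing configs cannot be affected.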

AI-assisted (Claude Code). Fully tested locally (unit, integration, and live API). I understand what the code does.

This is my first PR on the repo - any feedback on code style, structure, or approach is very welcome. Happy to iterate!

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

  • Closes openclaw#14992
User-visible / Behavior Changes

  • New config keys: tools.web.search.brave.mode ("web" | "llm-context") and tools.web.search.brave.llmContext.* (maxTokens, maxUrls, thresholdMode, maxSnippets, maxTokensPerUrl, maxSnippetsPerUrl).
  • freshness parameter returns an error when used in llm-context mode (unsupported by the endpoint).
  • Result format in llm-context mode includes content (joined snippets) instead of description, plus mode and sourceCount fields.
  • Switching modes requires a gateway restart (consistent with how all provider configs are resolved).
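A minimal sketch of the llm-context result shape described above, assuming the field names stated in the PR (mode, content, sourceCount); the toLlmContextResult helper and the double-newline join are illustrative, not the PR's actual code.

```typescript
// Illustrative result shape for llm-context mode, per the PR's notes:
// `content` (joined snippets) replaces `description`; `mode` and
// `sourceCount` are added. The join separator is an assumption.
interface LlmContextResult {
  mode: "llm-context";
  content: string;
  sourceCount: number;
}

function toLlmContextResult(
  snippets: string[],
  sourceCount: number
): LlmContextResult {
  return {
    mode: "llm-context",
    content: snippets.join("\n\n"),
    sourceCount,
  };
}
```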

Security Impact (required)

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? No - reuses existing BRAVE_API_KEY / tools.web.search.apiKey
  • New/changed network calls? Yes - new GET requests to https://api.search.brave.com/res/v1/llm/context
  • Command/tool execution surface changed? No - same web_search tool, same parameters
  • Data access scope changed? No
  • Risk + mitigation: The new endpoint is on the same Brave API domain, uses the same auth header (X-Subscription-Token), and all response content is wrapped with wrapWebContent() (matching the existing security pattern for titles and snippet content).
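The new network call can be sketched as below, assuming the same auth pattern the PR describes (same Brave domain, same X-Subscription-Token header). buildLlmContextRequest is a hypothetical helper for illustration; only the URL and header name come from the PR.

```typescript
// Sketch of the new GET request to the Brave LLM Context endpoint.
// The endpoint URL and X-Subscription-Token header are from the PR;
// everything else is illustrative.
function buildLlmContextRequest(
  query: string,
  apiKey: string
): { url: string; headers: Record<string, string> } {
  const url = new URL("https://api.search.brave.com/res/v1/llm/context");
  url.searchParams.set("q", query);
  return {
    url: url.toString(),
    headers: {
      Accept: "application/json",
      // Same auth header as the existing Brave web search provider.
      "X-Subscription-Token": apiKey,
    },
  };
}
```

Because the request targets the same domain and reuses the existing key, no new secret or egress destination is introduced.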

Repro + Verification

Environment

  • OS: macOS
  • Runtime: Node 20 (pnpm)
  • Config: tools.web.search.provider: "brave" with brave.mode: "llm-context"

Steps

  1. Set tools.web.search.brave.mode: "llm-context" in config
  2. Invoke web_search tool with a query
  3. Observe response contains mode: "llm-context", content fields with joined snippets
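One behavior worth spelling out from the steps above is the freshness guard: the PR states that freshness returns an error in llm-context mode because the endpoint does not support it. A hedged sketch, with a hypothetical validateParams name:

```typescript
// Sketch of the freshness rejection described in the PR. The mode
// values and the "freshness unsupported" behavior are from the PR;
// the function name and error string are illustrative.
function validateParams(
  mode: "web" | "llm-context",
  freshness?: string
): string | null {
  if (mode === "llm-context" && freshness !== undefined) {
    return "freshness is not supported by the Brave LLM Context API";
  }
  return null; // no error: web mode, or no freshness requested
}
```

Returning an explicit error rather than silently dropping the parameter keeps the agent's view of the tool honest.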

Expected

  • LLM Context API is called, results contain full text snippets

Actual

  • Matches expected

Evidence

  • Failing test/log before + passing after
  • Trace/log snippets
  • Live API verification
pnpm vitest run --config vitest.e2e.config.ts src/agents/tools/web-search.e2e.test.ts
 ✓ src/agents/tools/web-search.e2e.test.ts (37 tests) 7ms
 Test Files  1 passed (1)
      Tests  37 passed (37)

pnpm vitest run --config vitest.e2e.config.ts src/agents/tools/web-tools.enabled-defaults.e2e.test.ts
 ✓ src/agents/tools/web-tools.enabled-defaults.e2e.test.ts (27 tests) 13ms
 Test Files  1 passed (1)
      Tests  27 passed (27)

pnpm vitest run src/config/config.web-search-provider.test.ts
 ✓ src/config/config.web-search-provider.test.ts (7 tests) 7ms
 Test Files  1 passed (1)
      Tests  7 passed (7)

pnpm check (format + tsgo + lint) - all pass (tsgo error in discord/monitor/gateway-plugin.ts is pre-existing)

Live API tested with Brave Search subscription key:

  • Web mode: standard snippets, freshness accepted
  • LLM-context mode: full extracted content, freshness correctly rejected
  • llmContext.maxTokens tuning: visibly shorter output with lower values

Human Verification (required)

  • Verified scenarios: Config parsing with valid/invalid brave config, resolver functions with undefined/empty/full config, cache key differentiation between web and llm-context modes, live API calls in both modes.
  • Edge cases checked: Missing brave config block (defaults to web), freshness rejection in llm-context mode, strict mode Zod validation rejecting unknown keys, maxTokens range validation (below min / above max), mode switching via config + restart.
  • What I did NOT verify: All permutations of llmContext sub-params against live API (only tested maxTokens).

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? Yes - new optional config keys under tools.web.search.brave.*
  • Migration needed? No - no config = existing behavior unchanged

Failure Recovery (if this breaks)

  • How to disable/revert: Remove tools.web.search.brave config block, or set brave.mode: "web" to revert to standard Brave web search.
  • Files/config to restore: Only user config needs changing.
  • Bad symptoms: If the LLM Context API returns unexpected response shapes, the results array will be empty (graceful degradation via optional chaining and fallback defaults).

Risks and Mitigations

  • Risk: Brave LLM Context API response format changes in the future.
    • Mitigation: Response parsing uses defensive optional chaining and defaults; empty/missing fields produce empty results rather than errors.
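The mitigation's defensive-parsing strategy can be sketched as follows; the response shape (RawContextEntry) and parseContextResponse are assumptions for illustration, not the PR's actual types. The point is the pattern: optional chaining plus fallback defaults, so a changed response shape yields empty results instead of a thrown error.

```typescript
// Sketch of defensive parsing with optional chaining and defaults.
// The response shape is an assumption; only the degradation behavior
// (empty results, never a throw) is taken from the PR.
interface RawContextEntry {
  url?: string;
  snippets?: string[];
}

function parseContextResponse(
  body: unknown
): { url: string; snippets: string[] }[] {
  const entries =
    (body as { results?: RawContextEntry[] } | null)?.results ?? [];
  return entries
    .filter((e) => typeof e?.url === "string")
    .map((e) => ({ url: e.url!, snippets: e.snippets ?? [] }));
}
```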

Configuration

tools:
  web:
    search:
      provider: brave
      brave:
        mode: llm-context       # or "web" (the default when omitted)
        llmContext:
          maxTokens: 16384       # 1024-32768, default 8192
          maxUrls: 10            # 1-50, default 20
          thresholdMode: strict  # strict | balanced | lenient | disabled
          maxSnippets: 50        # 1-100
          maxTokensPerUrl: 4096  # 512-8192
          maxSnippetsPerUrl: 5   # 1-100

@openclaw-barnacle openclaw-barnacle bot added agents Agent runtime and tooling size: M labels Feb 14, 2026
Add support for Brave's /res/v1/llm/context endpoint as a configurable
mode within the existing Brave web search provider. The LLM Context API
returns pre-extracted, relevance-scored web content (full text snippets,
tables, code blocks) optimized for LLM consumption.

Configuration:
- tools.web.search.brave.mode: "llm-context" (default: "web")
- tools.web.search.brave.llmContext.maxTokens (1024-32768)
- tools.web.search.brave.llmContext.maxUrls (1-50)
- tools.web.search.brave.llmContext.thresholdMode (strict/balanced/lenient/disabled)
- tools.web.search.brave.llmContext.maxSnippets (1-100)
- tools.web.search.brave.llmContext.maxTokensPerUrl (512-8192)
- tools.web.search.brave.llmContext.maxSnippetsPerUrl (1-100)

Uses the same API key as standard Brave search. Freshness param is
rejected in llm-context mode (unsupported by the endpoint).

Closes openclaw#14992
@steipete steipete closed this Feb 16, 2026
@steipete steipete reopened this Feb 17, 2026

Labels

agents Agent runtime and tooling size: L

Development

Successfully merging this pull request may close these issues.

Feature: Support Brave Search LLM Context API as web_search provider option
