feat(tools): add Grok (xAI) as web_search provider #5796
tmchow wants to merge 4 commits into openclaw:main
Conversation
Add xAI's Grok as a new web_search provider alongside Brave and Perplexity.
Uses the xAI `/v1/responses` API with `tools: [{type: "web_search"}]`.
Configuration:
- tools.web.search.provider: "grok"
- tools.web.search.grok.apiKey or XAI_API_KEY env var
- tools.web.search.grok.model (default: grok-4-1-fast)
- tools.web.search.grok.inlineCitations (optional, embeds markdown links)
Returns AI-synthesized answers with citations similar to Perplexity.
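A minimal sketch of the request body this provider would send, based on the settings listed above. The helper itself, the `input` field name, and the `include` shape are assumptions drawn from the PR description, not code from this PR; verify against xAI's `/v1/responses` documentation.

```typescript
// Hypothetical helper mirroring the config options described above.
const DEFAULT_GROK_MODEL = "grok-4-1-fast";

function buildGrokRequestBody(
  query: string,
  model?: string,
  inlineCitations?: boolean,
) {
  return {
    model: model ?? DEFAULT_GROK_MODEL,
    input: query,
    tools: [{ type: "web_search" }],
    // Only request citation spans when the config flag is enabled.
    ...(inlineCitations ? { include: ["inline_citations"] } : {}),
  };
}
```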
Additional Comments (1)
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/agents/tools/web-search.ts
Line: 472:476
Comment:
[P1] Cache key for non-Brave providers ignores provider-specific settings.
In `runWebSearch`, the non-`brave` cache key doesn’t include `perplexityBaseUrl`/`perplexityModel` or `grokModel`/`grokInlineCitations` (`src/agents/tools/web-search.ts:472-476`). If a user changes Grok/Perplexity model or toggles `inlineCitations`, cached responses from the previous configuration can be returned incorrectly.
Also affects Perplexity (`src/agents/tools/web-search.ts:472-476`).
How can I resolve this? If you propose a fix, please make it concise.
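One possible shape for the fix, sketched with hypothetical parameter names mirroring the ones mentioned in the comment (the PR's actual `runWebSearch` code may differ):

```typescript
// Hypothetical param shape; the real fields live in runWebSearch.
interface WebSearchParams {
  query: string;
  provider: "brave" | "perplexity" | "grok";
  perplexityBaseUrl?: string;
  perplexityModel?: string;
  grokModel?: string;
  grokInlineCitations?: boolean;
}

// Fold every setting that changes the response into the cache key, so a
// config change never serves a stale cached answer.
function cacheKey(p: WebSearchParams): string {
  const parts: string[] = [p.provider, p.query];
  if (p.provider === "perplexity") {
    parts.push(p.perplexityBaseUrl ?? "", p.perplexityModel ?? "");
  } else if (p.provider === "grok") {
    parts.push(p.grokModel ?? "", String(p.grokInlineCitations ?? false));
  }
  return parts.join("\u0000"); // separator that cannot appear in a query
}
```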
CLAWDINATOR FIELD REPORT // PR Closure

I am CLAWDINATOR — cybernetic crustacean, maintainer triage bot for OpenClaw. I was sent from the future to keep this repo shipping clean code. I have scanned your PR. The code has heart. But the queue has no mercy.

OpenClaw receives ~25 PRs every hour. To keep the maintainers from a total system meltdown, I'm closing PRs that are unlikely to merge in the near term. Hasta la vista, PR — but not hasta la vista, contributor.

Still believe in this change? Come with me if you want to ship. Head to #pr-thunderdome-dangerzone on Discord — READ THE TOPIC or risk immediate termination. Bring a clear case — the problem, the fix, the impact. Stay br00tal.

🤖 This is an automated message from CLAWDINATOR, the OpenClaw maintainer bot.
```typescript
const payload = {
  query: params.query,
  provider: params.provider,
  model: params.grokModel ?? DEFAULT_GROK_MODEL,
  tookMs: Date.now() - start,
  content,
  citations,
  inlineCitations,
};
```
**Mismatched `inlineCitations` type**

`inlineCitations` in the Grok payload is being set to `data.inline_citations` (an array of spans), but the same name is used for the boolean config flag (`tools.web.search.grok.inlineCitations`). This makes the result shape inconsistent/ambiguous and will break any consumer that treats `inlineCitations` as a boolean (or expects a stable JSON shape). Consider renaming the returned field to something like `inline_citations` / `inlineCitationSpans`, and (if you want to echo the flag) include a separate boolean like `inlineCitationsEnabled`. (`src/agents/tools/web-search.ts:518-526`)
```typescript
const data = (await res.json()) as GrokSearchResponse;
const content = data.output_text ?? "No response";
const citations = data.citations ?? [];
const inlineCitations = data.inline_citations;

return { content, citations, inlineCitations };
```
**Likely wrong Grok response shape**

`runGrokSearch` casts the `/v1/responses` body to `GrokSearchResponse` and reads top-level `output_text`/`citations`/`inline_citations`. If the xAI Responses API returns content under an `output` array (common for "responses" APIs), this will reliably return `"No response"` + empty citations even on successful calls. Please update `GrokSearchResponse` and the extraction logic to match the actual `/v1/responses` response schema you're targeting. (`src/agents/tools/web-search.ts:448-453`)
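If the body does follow a Responses-style `output` array, extraction could look roughly like the sketch below. The field names (`output`, `message`, `output_text`) are assumptions drawn from similar APIs, not confirmed against xAI's schema:

```typescript
// Assumed response shape; confirm against xAI's /v1/responses docs.
interface GrokResponseBody {
  output_text?: string;
  output?: Array<{
    type: string;
    content?: Array<{ type: string; text?: string }>;
  }>;
}

function extractOutputText(data: GrokResponseBody): string {
  // Prefer a top-level convenience field if the API provides one.
  if (data.output_text) return data.output_text;
  // Otherwise walk message items and collect their text parts.
  const chunks =
    data.output
      ?.filter((item) => item.type === "message")
      .flatMap((item) => item.content ?? [])
      .filter((part) => part.type === "output_text" && part.text)
      .map((part) => part.text as string) ?? [];
  return chunks.length > 0 ? chunks.join("\n") : "No response";
}
```

Falling back through both shapes keeps the provider working even if the API adds or drops the convenience field.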
Summary
Uses the xAI `/v1/responses` API with `tools: [{type: "web_search"}]` and `include: ["inline_citations"]`. Fixes #5775
Configuration
Reopening Notes
This PR replaces #5778. It addresses the feedback:
- Updated the `grok` provider to correctly pass through `inline_citations` in the response payload when enabled.

Greptile Overview
Greptile Summary
This PR adds a third `web_search` provider (`grok`) alongside Brave and Perplexity, wiring config + runtime schema support under `tools.web.search.grok` and implementing an xAI `/v1/responses` request with the `web_search` tool. Tests were extended to cover Grok config resolution helpers.

Main integration points are `src/agents/tools/web-search.ts` (provider selection, API key/model resolution, caching, and request/response handling) and the config types/zod runtime schemas to allow `provider: grok` and Grok-specific settings.

Confidence Score: 2/5

The `/v1/responses` response is likely parsed with the wrong shape, and the returned `inlineCitations` field name/type conflicts with the boolean config flag. Both issues can cause incorrect or unstable tool outputs at runtime.