Google Vertex AI / Google AI providers missing cachePoint annotations #17568

@ccompton-merge

Description

applyCaching() in transform.ts sets cache hints for Anthropic, Bedrock, OpenRouter, Copilot, and OpenAI-compatible providers, but skips Google Vertex AI and Google AI entirely. This means Gemini models via @ai-sdk/google-vertex or @ai-sdk/google never get the AI SDK's cachePoint annotation, so implicit context caching can't kick in for the system prompt prefix.

The SDK already supports cachePoint for these providers — it just needs to be set.

Impact: roughly 5-15K tokens of system prompt plus tool declarations are re-billed as new input on every request. Gemini bills cached tokens at a 75% discount, so the waste is meaningful for heavy users.
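To make the impact concrete, here is a small sketch of the per-request savings math. The $/MTok input price below is a placeholder, not an actual Gemini rate; the 75% cache discount is the figure cited above.

```typescript
// Estimate per-request savings from implicit context caching.
// inputPricePerMTok is a hypothetical price in USD per 1M input tokens.
function cachedSavingsUSD(
  promptTokens: number,
  inputPricePerMTok: number,
  cacheDiscount = 0.75 // cached tokens billed at a 75% discount (per issue)
): number {
  const fullCost = (promptTokens / 1_000_000) * inputPricePerMTok;
  return fullCost * cacheDiscount;
}

// Example: a 10K-token system prefix at a hypothetical $1.25/MTok input price
// saves 10_000 / 1e6 * 1.25 * 0.75 = $0.009375 on every request after the first.
```

Scaled to thousands of requests per day, the uncached prefix adds up quickly, which is why the missing annotation matters.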

Reproduction

  1. Configure opencode with a Google Vertex AI or Google AI provider
  2. Send a message that includes the system prompt
  3. Observe that no cachePoint annotation is set on the system message
  4. Compare with Anthropic/Bedrock providers which DO get cache annotations

Expected Behavior

Google Vertex AI and Google AI providers should receive google: { cachePoint: { type: "default" } } annotations on the system prompt, matching the pattern used by Anthropic (cacheControl) and Bedrock (cachePoint).
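A minimal sketch of what the fix in applyCaching() could look like. The message shape, helper name, and provider-ID strings here are assumptions about opencode's internals, not the actual transform.ts code; the providerOptions payload follows the expected behavior described above.

```typescript
// Hypothetical helper mirroring the Anthropic/Bedrock branches of applyCaching().
type Msg = { role: string; providerOptions?: Record<string, unknown> };

function applyGoogleCaching(msgs: Msg[], providerID: string): Msg[] {
  // Provider IDs are assumed; match however transform.ts identifies
  // @ai-sdk/google and @ai-sdk/google-vertex.
  if (providerID !== "google" && providerID !== "google-vertex") return msgs;
  return msgs.map((msg) =>
    // Annotate the system-prompt prefix so implicit caching can match it.
    msg.role === "system"
      ? {
          ...msg,
          providerOptions: {
            ...msg.providerOptions,
            google: { cachePoint: { type: "default" } },
          },
        }
      : msg
  );
}
```

Non-system messages and other providers pass through untouched, matching how the existing Anthropic (cacheControl) and Bedrock (cachePoint) branches scope their annotations.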

Environment

  • opencode version: latest main
  • Provider: @ai-sdk/google-vertex and @ai-sdk/google
  • File: packages/opencode/src/provider/transform.ts

Metadata

Labels

core: Anything pertaining to core functionality of the application (opencode server stuff)
