
Extract TokenCache utility, add caching to GitHubTokenVerifier #3547

Merged
jlowin merged 3 commits into main from fix/token-cache-github-verifier on Mar 18, 2026

Conversation

@jlowin (Member) commented Mar 18, 2026

GitHubTokenVerifier makes two uncached HTTP calls to GitHub's API on every authenticated request (~600ms round-trip), while IntrospectionTokenVerifier has had full caching support since #3298. Rather than duplicate the ~80 lines of caching machinery, this extracts it into a shared TokenCache utility that both verifiers now compose.

TokenCache provides SHA-256 key hashing, TTL with token-expiry awareness, bounded size with FIFO eviction, periodic cleanup, and defensive deep copies — the same behavior IntrospectionTokenVerifier had inline, now reusable. IntrospectionTokenVerifier is refactored to use it (pure refactor, no behavioral change), and GitHubTokenVerifier gains cache_ttl_seconds and max_cache_size parameters with caching disabled by default.
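A rough sketch of how a verifier composes the cache. The constructor wiring and the get() method are assumptions inferred from this description and the validation messages, not the verified API; only set() appears verbatim in the diff:

from fastmcp.utilities.token_cache import TokenCache

class CachedVerifier:
    def __init__(self, cache_ttl_seconds=None, max_cache_size=None):
        # Caching is off by default; the cache itself handles SHA-256 key
        # hashing, TTL/expiry checks, FIFO eviction, and defensive deep copies.
        self._cache = TokenCache(ttl_seconds=cache_ttl_seconds, max_size=max_cache_size)

    async def verify_token(self, token: str):
        if (hit := self._cache.get(token)) is not None:
            return hit  # skip the remote round-trip on a warm cache
        result = await self._fetch_and_verify(token)  # the expensive HTTP calls
        self._cache.set(token, result)
        return result

    async def _fetch_and_verify(self, token: str):
        ...  # placeholder for the verifier's real remote validation

On the provider side, the new parameters surface directly: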

from fastmcp.server.auth.providers.github import GitHubProvider

provider = GitHubProvider(
    client_id="...",
    client_secret="...",
    cache_ttl_seconds=300,  # cache verified tokens for 5 minutes
)

Closes #3543

@jlowin added the "enhancement" label Mar 18, 2026
@marvin-context-protocol (Bot) added the "auth" and "server" labels Mar 18, 2026

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 381895ef89

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +77 to +82
if ttl_seconds is not None and ttl_seconds < 0:
    raise ValueError(
        f"cache_ttl_seconds must be non-negative, got {ttl_seconds}"
    )
if max_size is not None and max_size < 0:
    raise ValueError(f"max_cache_size must be non-negative, got {max_size}")

P2: Preserve negative cache settings as disabled

IntrospectionTokenVerifier previously treated cache_ttl_seconds <= 0 and max_cache_size <= 0 as “cache disabled”, and PropelAuthProvider._create_token_verifier still forwards caller-provided overrides straight through. Raising here turns existing configs that use -1 as a sentinel to disable caching into startup-time ValueErrors instead of a no-op cache, which is a behavior regression for deployed introspection/PropelAuth setups.


@jlowin (Member, Author) replied:

This is a new API surface — GitHubTokenVerifier didn't have caching before, and IntrospectionTokenVerifier never documented -1 as a sentinel. Negative TTL is almost certainly a caller bug, and failing fast is clearer than silently disabling.
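For illustration, behavior under the new validation (whether 0 and None still mean "disabled" is an assumption carried over from the old inline behavior):

TokenCache(ttl_seconds=-1)    # now a startup-time ValueError: fail fast on a caller bug
TokenCache(ttl_seconds=0)     # presumably still an effectively disabled cache
TokenCache(ttl_seconds=None)  # default: no caching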

@marvin-context-protocol (Bot, Contributor) commented Mar 18, 2026

Test Failure Analysis

Summary: The "Tests with lowest-direct dependencies" job fails with an ImportError because Base64ImageSourceParam was introduced in anthropic 0.48.0, but the current minimum version constraint is >=0.40.0.

Root Cause: PR #3550 added src/fastmcp/client/sampling/handlers/anthropic.py, which imports Base64ImageSourceParam from anthropic.types. This type doesn't exist in anthropic 0.40.0 (the floor set in pyproject.toml). The lowest-direct-deps CI job pins all extras to their minimum declared versions, so it installs anthropic==0.40.x, causing:

ImportError: cannot import name 'Base64ImageSourceParam' from 'anthropic.types'

Suggested Solution: PR #3553 (Bump anthropic minimum to 0.48.0) is already open and targets exactly this issue. Merging it will raise the floor to the version that introduced Base64ImageSourceParam and unblock this CI job. No changes are needed in this PR.

Detailed Analysis

Failure log:

ERROR collecting tests/client/sampling/handlers/test_anthropic_handler.py
ImportError while importing test module '...test_anthropic_handler.py'.
E   ImportError: cannot import name 'Base64ImageSourceParam' from 'anthropic.types'
      (/home/runner/work/fastmcp/fastmcp/.venv/lib/python3.10/site-packages/anthropic/types/__init__.py)
The above exception was the direct cause of the following exception:
E   ImportError: The `anthropic` package is not installed. Install it with `pip install fastmcp[anthropic]` or add `anthropic` to your dependencies.

Root file: src/fastmcp/client/sampling/handlers/anthropic.py:26 — imports Base64ImageSourceParam unconditionally from anthropic.types

Version constraint in pyproject.toml:

anthropic = ["anthropic>=0.40.0"]

Base64ImageSourceParam was introduced in anthropic 0.48.0. The lowest-direct test installs exactly 0.40.0.
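Per the suggested solution, the corresponding one-line change to the constraint in pyproject.toml would be:

anthropic = ["anthropic>=0.48.0"]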

Note: This comment was edited to reflect the latest workflow run (23263028302). Previous analysis covered an older run.


🤖 Generated with Claude Code

jlowin added 2 commits March 18, 2026 12:23
Extract the caching machinery from IntrospectionTokenVerifier into a
shared TokenCache class in fastmcp.utilities.token_cache, then wire
it into both IntrospectionTokenVerifier and GitHubTokenVerifier.
@jlowin force-pushed the fix/token-cache-github-verifier branch from 381895e to fc9fa3a on March 18, 2026 16:33

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fc9fa3a8ba


Comment thread: src/fastmcp/utilities/token_cache.py (outdated)
Comment on lines +128 to +131
self._maybe_cleanup()
self._enforce_size_limit()

cache_key = self._hash_token(token)

P2: Compute cache_key before evicting from a full TokenCache

When the cache is already at max_size and set() is called for a token that is already present, _enforce_size_limit() evicts an unrelated entry before we know this write is only an overwrite. I verified with max_size=2 that set('C') twice leaves the cache with a single entry. In practice this happens under overlapping verify_token() calls for the same uncached token, so the new shared cache can silently lose capacity under concurrent auth traffic and deliver much less reuse than configured.
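A minimal repro of that sequence (constructor signature and value shapes assumed):

cache = TokenCache(max_size=2)
cache.set("A", {"sub": "a"})
cache.set("B", {"sub": "b"})  # cache is now at max_size
cache.set("C", {"sub": "c"})  # FIFO eviction drops "A", as expected
cache.set("C", {"sub": "c"})  # bug: evicts "B" before noticing this is an overwrite
# only "C" remains; effective capacity has silently shrunk to 1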


@jlowin (Member, Author) replied:

Good catch — fixed in a756e0f. The cache key is now computed before enforcing the size limit, and eviction is skipped for overwrites.
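A sketch of the corrected ordering as a fragment (internal names such as _store and _hash_token are assumptions; assumes import copy at module top):

def set(self, token: str, value: dict) -> None:
    cache_key = self._hash_token(token)  # compute the key first
    self._maybe_cleanup()
    if cache_key not in self._store:
        # Only a genuinely new entry can grow the cache past max_size,
        # so overwriting an existing token no longer evicts a neighbor.
        self._enforce_size_limit()
    self._store[cache_key] = copy.deepcopy(value)  # defensive copy on write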

"github_user_data": user_data,
},
)
self._cache.set(token, result)

P2: Skip caching when GitHub scope lookup is non-200

With caching enabled, a successful /user response followed by a non-200 /user/repos response still falls through to token_scopes = ['user'] and is cached here. That means a transient GitHub rate-limit/5xx on the scope probe becomes a positive cache hit for the full TTL, so later requests never retry scope discovery. This is especially visible on the default GitHubProvider(required_scopes=['user']) path, where incomplete scope data is accepted and memoized.


@jlowin (Member, Author) replied:

Good catch — fixed in a756e0f. Caching is now conditional on scopes_response.status_code == 200.
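Sketched, the guarded write would then look like (surrounding names assumed):

if scopes_response.status_code == 200:
    # Only memoize complete scope data; a transient rate limit or 5xx on the
    # scope probe must not become a positive cache hit for the full TTL.
    self._cache.set(token, result)
return result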

jlowin merged commit 269c9c9 into main on Mar 18, 2026 (8 of 12 checks passed)
jlowin deleted the fix/token-cache-github-verifier branch on March 18, 2026 19:26

Labels

auth: Related to authentication (Bearer, JWT, OAuth, WorkOS) for client or server.
enhancement: Improvement to existing functionality. For issues and smaller PR improvements.
server: Related to FastMCP server implementation or server-side functionality.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

GitHubTokenVerifier lacks cache_ttl_seconds option

1 participant