Extract TokenCache utility, add caching to GitHubTokenVerifier #3547
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 381895ef89
```python
if ttl_seconds is not None and ttl_seconds < 0:
    raise ValueError(
        f"cache_ttl_seconds must be non-negative, got {ttl_seconds}"
    )
if max_size is not None and max_size < 0:
    raise ValueError(f"max_cache_size must be non-negative, got {max_size}")
```
**Preserve negative cache settings as disabled**

`IntrospectionTokenVerifier` previously treated `cache_ttl_seconds <= 0` and `max_cache_size <= 0` as "cache disabled", and `PropelAuthProvider._create_token_verifier` still forwards caller-provided overrides straight through. Raising here turns existing configs that use `-1` as a sentinel to disable caching into startup-time `ValueError`s instead of a no-op cache, which is a behavior regression for deployed introspection/PropelAuth setups.
This is a new API surface — `GitHubTokenVerifier` didn't have caching before, and `IntrospectionTokenVerifier` never documented `-1` as a sentinel. Negative TTL is almost certainly a caller bug, and failing fast is clearer than silently disabling.
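To make the trade-off concrete, here is a minimal, self-contained sketch contrasting the two policies; the toy classes are illustrative, not FastMCP's implementation.

```python
# Toy contrast of the two policies discussed above; illustrative only.

class SentinelCache:
    """Old-style policy: a non-positive TTL silently disables caching."""

    def __init__(self, ttl_seconds: float | None = None):
        # ttl_seconds <= 0 (e.g. the -1 sentinel) means "cache disabled".
        self.enabled = ttl_seconds is not None and ttl_seconds > 0


class FailFastCache:
    """New-style policy: a negative TTL is rejected at construction time."""

    def __init__(self, ttl_seconds: float | None = None):
        if ttl_seconds is not None and ttl_seconds < 0:
            raise ValueError(
                f"cache_ttl_seconds must be non-negative, got {ttl_seconds}"
            )
        self.enabled = ttl_seconds is not None and ttl_seconds > 0


print(SentinelCache(ttl_seconds=-1).enabled)  # False: silently disabled
try:
    FailFastCache(ttl_seconds=-1)
except ValueError as exc:
    print(exc)  # cache_ttl_seconds must be non-negative, got -1
```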
**Test Failure Analysis**

Summary: the root cause is a change introduced in PR #3550; the suggested solution is PR #3553. The detailed analysis points to this version constraint:

```toml
anthropic = ["anthropic>=0.40.0"]
```

Note: this comment was edited to reflect the latest workflow run (23263028302); the previous analysis covered an older run.

🤖 Generated with Claude Code
Extract the caching machinery from `IntrospectionTokenVerifier` into a shared `TokenCache` class in `fastmcp.utilities.token_cache`, then wire it into both `IntrospectionTokenVerifier` and `GitHubTokenVerifier`.
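As a rough sketch of the resulting surface, a caller might opt in to caching like this (the import path and values are assumptions for illustration, not verified against the PR):

```python
# Hypothetical opt-in to the new caching parameters; the import path below
# is an assumption, and defaults may differ in the actual PR.
from fastmcp.server.auth.providers.github import GitHubTokenVerifier

verifier = GitHubTokenVerifier(
    cache_ttl_seconds=300,  # memoize verified tokens for five minutes
    max_cache_size=1000,    # bound memory; oldest entries evicted first
)
# Omitting both parameters leaves caching disabled, the PR's default.
```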
Force-pushed from `381895e` to `fc9fa3a`.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fc9fa3a8ba
```python
self._maybe_cleanup()
self._enforce_size_limit()

cache_key = self._hash_token(token)
```
**Compute `cache_key` before evicting from a full `TokenCache`**

When the cache is already at `max_size` and `set()` is called for a token that is already present, `_enforce_size_limit()` evicts an unrelated entry before we know the write is only an overwrite. I verified that with `max_size=2`, calling `set('C')` twice leaves the cache holding a single entry. In practice this happens under overlapping `verify_token()` calls for the same uncached token, so the new shared cache can silently lose capacity under concurrent auth traffic and deliver much less reuse than configured.
Good catch — fixed in a756e0f. The cache key is now computed before enforcing the size limit, and eviction is skipped for overwrites.
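A minimal sketch of the corrected ordering, assuming a dict-backed FIFO store; the class below is illustrative, not the PR's `TokenCache`:

```python
# Sketch of the fixed set() ordering; dict-backed FIFO store, illustrative only.
import hashlib


class TokenCache:
    def __init__(self, max_size: int = 1000):
        self._max_size = max_size
        self._store: dict[str, object] = {}  # dicts preserve insertion order

    @staticmethod
    def _hash_token(token: str) -> str:
        return hashlib.sha256(token.encode()).hexdigest()

    def _enforce_size_limit(self) -> None:
        # FIFO eviction: drop the oldest insertion until there is room.
        while len(self._store) >= self._max_size:
            del self._store[next(iter(self._store))]

    def set(self, token: str, value: object) -> None:
        # Fix: hash first, so overwriting an existing key never evicts an
        # unrelated entry from a full cache.
        cache_key = self._hash_token(token)
        if cache_key not in self._store:
            self._enforce_size_limit()
        self._store[cache_key] = value


cache = TokenCache(max_size=2)
cache.set("A", 1)
cache.set("B", 2)
cache.set("C", 3)   # full: evicts "A" to make room
cache.set("C", 30)  # overwrite: no eviction, "B" survives
assert len(cache._store) == 2  # with the old ordering this dropped to 1
```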
| "github_user_data": user_data, | ||
| }, | ||
| ) | ||
| self._cache.set(token, result) |
**Skip caching when GitHub scope lookup is non-200**

With caching enabled, a successful `/user` response followed by a non-200 `/user/repos` response still falls through to `token_scopes = ['user']` and is cached here. That means a transient GitHub rate limit or 5xx on the scope probe becomes a positive cache hit for the full TTL, so later requests never retry scope discovery. This is especially visible on the default `GitHubProvider(required_scopes=['user'])` path, where incomplete scope data is accepted and memoized.
Good catch — fixed in a756e0f. Caching is now conditional on `scopes_response.status_code == 200`.
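A simplified sketch of the guarded path, assuming `httpx`-style calls; helper names and the exact control flow are illustrative, not the real verifier:

```python
# Simplified sketch of the conditional caching described above; not the
# actual GitHubTokenVerifier control flow.
import httpx


async def verify_github_token(token: str, cache) -> dict | None:
    headers = {"Authorization": f"Bearer {token}"}
    async with httpx.AsyncClient() as client:
        user_resp = await client.get("https://api.github.com/user", headers=headers)
        if user_resp.status_code != 200:
            return None  # invalid token, nothing to cache
        user_data = user_resp.json()

        # Second call exists only to read the X-OAuth-Scopes response header.
        scopes_resp = await client.get(
            "https://api.github.com/user/repos", headers=headers
        )

    if scopes_resp.status_code == 200:
        raw = scopes_resp.headers.get("X-OAuth-Scopes", "")
        token_scopes = [s.strip() for s in raw.split(",") if s.strip()]
    else:
        token_scopes = ["user"]  # fallback when the scope probe fails

    result = {"scopes": token_scopes, "github_user_data": user_data}

    # The fix: memoize only complete results, so a transient rate limit or
    # 5xx on the scope probe is retried on the next request instead of
    # pinning an incomplete entry in the cache for the full TTL.
    if scopes_resp.status_code == 200:
        cache.set(token, result)
    return result
```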
`GitHubTokenVerifier` makes two uncached HTTP calls to GitHub's API on every authenticated request (~600ms round-trip), while `IntrospectionTokenVerifier` has had full caching support since #3298. Rather than duplicate the ~80 lines of caching machinery, this extracts it into a shared `TokenCache` utility that both verifiers now compose.

`TokenCache` provides SHA-256 key hashing, TTL with token-expiry awareness, bounded size with FIFO eviction, periodic cleanup, and defensive deep copies: the same behavior `IntrospectionTokenVerifier` had inline, now reusable. `IntrospectionTokenVerifier` is refactored to use it (a pure refactor, no behavioral change), and `GitHubTokenVerifier` gains `cache_ttl_seconds` and `max_cache_size` parameters, with caching disabled by default.

Closes #3543
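For a sense of the utility's shape, here is a minimal sketch matching the feature list above (SHA-256 keys, TTL capped by token expiry, FIFO bound, periodic cleanup, defensive copies); it is an illustration under those assumptions, not the PR's code:

```python
# Minimal sketch of the described TokenCache behavior; illustrative only,
# the PR's implementation differs in detail.
import copy
import hashlib
import time
from typing import Any


class TokenCache:
    def __init__(
        self,
        ttl_seconds: float = 300.0,
        max_size: int = 1000,
        cleanup_interval: float = 60.0,
    ):
        self._ttl = ttl_seconds
        self._max_size = max_size
        self._cleanup_interval = cleanup_interval
        self._last_cleanup = time.monotonic()
        self._store: dict[str, tuple[float, Any]] = {}  # key -> (expiry, value)

    @staticmethod
    def _hash_token(token: str) -> str:
        # SHA-256 keys: raw bearer tokens are never held as dict keys.
        return hashlib.sha256(token.encode()).hexdigest()

    def set(self, token: str, value: Any, expires_in: float | None = None) -> None:
        key = self._hash_token(token)
        self._maybe_cleanup()
        if key not in self._store:  # overwrites never trigger eviction
            self._enforce_size_limit()
        # Token-expiry awareness: never cache beyond the token's own lifetime.
        ttl = self._ttl if expires_in is None else min(self._ttl, expires_in)
        self._store[key] = (time.monotonic() + ttl, copy.deepcopy(value))

    def get(self, token: str) -> Any | None:
        entry = self._store.get(self._hash_token(token))
        if entry is None or entry[0] <= time.monotonic():
            return None
        return copy.deepcopy(entry[1])  # defensive copy out as well as in

    def _enforce_size_limit(self) -> None:
        # FIFO: evict oldest insertions until there is room for one more.
        while len(self._store) >= self._max_size:
            del self._store[next(iter(self._store))]

    def _maybe_cleanup(self) -> None:
        # Periodic cleanup: sweep expired entries at most once per interval.
        now = time.monotonic()
        if now - self._last_cleanup < self._cleanup_interval:
            return
        self._last_cleanup = now
        for key in [k for k, (exp, _) in self._store.items() if exp <= now]:
            del self._store[key]
```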