fix: implement model cache refresh to prevent stale disk cache #9478
Conversation
- Add refreshModels() function to force a fresh API fetch, bypassing the cache
- Add initializeModelCacheRefresh() for background refresh on extension load
- Update flushModels() with an optional refresh parameter
- Refresh public providers (OpenRouter, Glama, Vercel AI Gateway) on startup
- Update manual refresh triggers to actually fetch fresh data
- Use atomic writes to keep the cache available during refresh

Previously, the disk cache was written once and never refreshed, causing stale model info (pricing, context windows, new models) to persist indefinitely. This fix ensures the cache is refreshed on extension load and whenever users manually refresh models.
Review complete. The previously identified issue has been resolved.
Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.
Keep old cache in memory when refresh=true to avoid gap in cache availability. This prevents getModels() from triggering a fresh API fetch if called immediately after flushModels(router, true). The refreshModels() function will atomically replace the memory cache when the refresh completes, maintaining graceful degradation.
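A minimal sketch of the behavior described above, assuming a module-level memory cache map and a stand-in fetch helper (names other than `flushModels`/`refreshModels` are illustrative, not the PR's actual code):

```typescript
type ModelInfo = { contextWindow: number; pricePerToken: number }
type ModelRecord = Record<string, ModelInfo>

// Module-level memory cache (assumption: the real cache also has a TTL layer).
const memoryCache = new Map<string, ModelRecord>()

// Stand-in for the real per-provider API call.
async function fetchFromApi(router: string): Promise<ModelRecord> {
	return { [`${router}/example-model`]: { contextWindow: 128_000, pricePerToken: 0.000002 } }
}

async function refreshModels(router: string): Promise<void> {
	const fresh = await fetchFromApi(router)
	// Atomic replace: readers see either the old entry or the new one, never a gap.
	memoryCache.set(router, fresh)
}

function flushModels(router: string, refresh = false): void {
	if (refresh) {
		// Keep the stale entry in place; replace it only when fresh data arrives.
		void refreshModels(router)
	} else {
		memoryCache.delete(router)
	}
}
```

Because `flushModels(router, true)` does not delete the entry, a `getModels()` call racing the refresh still gets the old (stale but usable) data instead of triggering its own API fetch.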
try {
	// Force fresh API fetch - skip getModelsFromCache() check
	switch (provider) {
		case "openrouter":
I'm a little nervous about having this list of providers here in a way that we forget to update when we add a new provider. Any ideas on how to prevent that from happening?
Yeah, let me fix that
Is there at least a way to combine with the getModels above?
Extract the provider switch statement into a shared fetchModelsFromProvider() function to eliminate duplication between getModels() and refreshModels(). This ensures we only maintain the provider list in one place, making it easier to add new providers without forgetting to update both functions.
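The suggested deduplication might look like this sketch (provider list and return shape are assumptions): both entry points delegate to one switch, so a new provider is added in exactly one place.

```typescript
type ModelRecord = Record<string, { contextWindow: number }>

// Single source of truth for the provider list.
async function fetchModelsFromProvider(provider: string): Promise<ModelRecord> {
	switch (provider) {
		case "openrouter":
		case "glama":
		case "vercel-ai-gateway":
			// Placeholder for the real per-provider fetcher call.
			return { [`${provider}/model`]: { contextWindow: 128_000 } }
		default:
			throw new Error(`Unknown provider: ${provider}`)
	}
}

async function getModels(provider: string): Promise<ModelRecord> {
	// The real code would consult getModelsFromCache() first; omitted here.
	return fetchModelsFromProvider(provider)
}

async function refreshModels(provider: string): Promise<ModelRecord> {
	// Always bypasses the cache, but shares the same provider switch.
	return fetchModelsFromProvider(provider)
}
```

A `default` branch that throws (or, stronger, a union type of provider names) turns a forgotten provider into an immediate error rather than a silently stale list.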
Problem
The on-disk model cache was never refreshed after initial creation, causing stale model info (pricing, context windows, new models) to persist indefinitely.

Previously, flushModels() only cleared the memory cache (5-minute TTL), but the disk cache remained untouched. After the memory cache expired, getModelsFromCache() would reload from the stale disk cache, preventing any fresh API calls.

Solution
1. Add refreshModels() function
2. Add initializeModelCacheRefresh()
3. Update flushModels() signature: add an optional refresh parameter; with refresh=true, a background API fetch is triggered after the flush
4. Update all manual refresh triggers
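The atomic-write part of the solution can be sketched as a write-to-temp-then-rename pattern (file naming and helper are assumptions, not the PR's exact code): readers always see either the complete old cache file or the complete new one, never a partially written file.

```typescript
import { promises as fs } from "fs"
import * as path from "path"
import * as os from "os"

async function writeCacheAtomically(cacheFile: string, data: unknown): Promise<void> {
	const tmpFile = `${cacheFile}.tmp`
	// Write the full payload to a sibling temp file first...
	await fs.writeFile(tmpFile, JSON.stringify(data), "utf8")
	// ...then swap it in. rename() is atomic within a single filesystem,
	// so concurrent readers never observe a truncated cache file.
	await fs.rename(tmpFile, cacheFile)
}
```

This is what keeps the disk cache available for graceful degradation while a refresh is in flight.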
Cache Freshness
With this fix, the cache is refreshed on extension load and whenever users manually refresh models, keeping pricing, context windows, and newly released models current.
Testing
All existing tests pass. The cache refresh runs in the background without blocking activation, and atomic writes keep the old cache available until fresh data lands.
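The non-blocking startup behavior can be sketched roughly as a fire-and-forget loop over the public providers (the counter is a test hook and the provider list is taken from the description above; the real `initializeModelCacheRefresh()` may differ):

```typescript
const PUBLIC_PROVIDERS = ["openrouter", "glama", "vercel-ai-gateway"] as const

let refreshCount = 0 // illustration-only counter; the real code has no such hook

async function refreshModels(provider: string): Promise<void> {
	refreshCount++
	// Placeholder for the real fetch-and-cache logic.
}

function initializeModelCacheRefresh(): void {
	for (const provider of PUBLIC_PROVIDERS) {
		// Fire-and-forget: errors are logged, never thrown, so a slow or
		// unavailable model API can never block extension activation.
		refreshModels(provider).catch((err) => {
			console.error(`Model refresh failed for ${provider}:`, err)
		})
	}
}
```

Because the promises are not awaited, `activate()` returns immediately and the refreshed models simply appear in the cache when each fetch completes.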
Changes
- src/api/providers/fetchers/modelCache.ts: Add refresh functions
- src/extension.ts: Initialize background refresh on activation
- src/core/webview/webviewMessageHandler.ts: Update manual refresh handlers
- src/api/providers/fetchers/lmstudio.ts: Simplify with new refresh API