feat(statusline): fetching context limit per model dynamically from litellm #514
Conversation
Replace fixed 200K context limit with dynamic model-specific limits

- Remove CONTEXT_LIMIT from _consts.ts as it is no longer needed
- Remove corresponding import from data-loader.ts

This prepares for implementing dynamic context limits per model
Add max_tokens, max_input_tokens, and max_output_tokens fields

- These fields will be used to determine model-specific context limits
- Matches LiteLLM pricing data structure
- All fields are optional to maintain backward compatibility
Add method to retrieve model-specific context limits from LiteLLM data

- Returns max_input_tokens or falls back to max_tokens
- Returns null if model not found or no context limit data available
- Uses existing Result pattern for error handling
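The fallback order this commit describes can be sketched as a small helper. This is an illustration only, not the actual PricingFetcher code (the real method also wraps the lookup in the Result pattern); the `ModelPricing` shape below covers only the fields this PR touches:

```typescript
// Illustrative sketch of the documented fallback order:
// max_input_tokens, then max_tokens, then null.
type ModelPricing = {
	max_tokens?: number;
	max_input_tokens?: number;
	max_output_tokens?: number;
};

function contextLimitFrom(pricing: ModelPricing | undefined): number | null {
	if (pricing == null) {
		return null; // model not found in pricing data
	}
	return pricing.max_input_tokens ?? pricing.max_tokens ?? null;
}
```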
Pass model ID and offline flag to calculateContextTokens

- Enable model-specific context limit calculation
- Update function call to include modelId from hookData
- Pass offline flag for consistent behavior
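The percentage math this enables can be sketched as follows. The 200_000 default mirrors the PR's stated fallback when no model id (or no LiteLLM data) is available; the function name here is an assumption for illustration:

```typescript
// Sketch: rounded percentage of the context window in use, with the
// PR's 200K fallback when no model-specific limit is known.
function contextPercentage(inputTokens: number, contextLimit: number | null): number {
	const limit = contextLimit ?? 200_000;
	return Math.round((inputTokens / limit) * 100);
}
```

With the 3000-token fixtures used in this PR, 3000 / 200_000 = 1.5%, which rounds to 2% for every Claude 4 variant sharing the 200K window.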
Add npm scripts to test statusline with different Claude 4 models

- test:statusline:sonnet4 - Test with Claude 4 Sonnet
- test:statusline:opus4 - Test with Claude 4.1 Opus
- test:statusline:sonnet41 - Test with Claude 4.1 Sonnet
- test:statusline:all - Run all model tests in sequence

All tests use --offline flag for consistent testing
Add test fixtures for different Claude 4 model variants

- statusline-test-sonnet4.json - Claude 4 Sonnet test data
- statusline-test-opus4.json - Claude 4.1 Opus test data
- statusline-test-sonnet41.json - Claude 4.1 Sonnet test data

Each test file contains hookData with 3000 input tokens for testing context percentage calculations
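Each fixture follows roughly this shape, reconstructed from the fixture diffs quoted in the review comments further down this thread; the full files also carry hook/transcript fields not shown here:

```typescript
// Sketch of one statusline fixture (Sonnet 4 variant), reconstructed
// from the fixture diffs quoted in the review. Not the complete file.
const sonnet4Fixture = {
	cwd: "/Users/test/project",
	model: { id: "claude-sonnet-4-20250514", display_name: "Sonnet 4" },
	workspace: {
		current_dir: "/Users/test/project",
		project_dir: "/Users/test/project",
	},
};
```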
Add comprehensive test coverage for dynamic context limits

- Add getModelContextLimit tests in pricing-fetcher.ts
- Add context percentage calculation tests in statusline.ts
- Test model data presence/absence scenarios
- Verify consistent percentage calculations across Claude 4 models
Update symbol index status memory to reflect recent changes

- Document new getModelContextLimit method
- Record enhanced calculateContextTokens functionality
- Add testing commands for Claude 4 model variants
- Update statistics after symbol index refresh
- Mark feature as fully operational and tested
Walkthrough

Removes a fixed CONTEXT_LIMIT, adds model-specific context limits to the pricing schema and PricingFetcher, updates calculateContextTokens and the statusline to use model-aware limits (with an offline fallback), adds statusline test fixtures/scripts, and documents symbol index onboarding/status.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Statusline
    participant DataLoader as calculateContextTokens
    participant Pricing as PricingFetcher
    participant PricingData as Pricing Source
    User->>Statusline: run statusline (session with model id, offline?)
    Statusline->>DataLoader: calculateContextTokens(transcriptPath, modelId, offline)
    DataLoader->>Pricing: getModelContextLimit(modelId) (honor offline)
    Pricing->>PricingData: getModelPricing(modelId)
    PricingData-->>Pricing: pricing (may include max_input_tokens/max_tokens)
    Pricing-->>DataLoader: numeric contextLimit | null
    DataLoader-->>Statusline: { inputTokens, percentage, contextLimit } | null
    Statusline-->>User: render statusline with context usage
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Summary of Changes
Hello @ryoppippi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces dynamic context window limits for Claude models, moving away from a fixed 200K token limit. It integrates with LiteLLM to fetch model-specific context limits, ensuring more accurate context usage calculations and improved status line reporting.
Highlights
- Dynamic Context Calculation: Replaced the hardcoded 200K CONTEXT_LIMIT with model-specific values fetched dynamically.
- LiteLLM Integration: Implemented fetching of context limits (max_input_tokens or max_tokens) directly from LiteLLM pricing data.
- Enhanced Type Safety: Updated modelPricingSchema to include new optional fields for max_tokens, max_input_tokens, and max_output_tokens.
- Improved Statusline: Modified the statusline command to pass the model ID, enabling dynamic context percentage calculation based on the active model's limits.
- Comprehensive Testing: Added new test fixtures and package.json scripts for various Claude 4 model variants (Sonnet 4, Opus 4.1, Sonnet 4.1) to validate dynamic context limit behavior.
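The schema change highlighted above amounts to three optional fields. Sketched here in plain TypeScript (the actual PR extends a zod schema, `modelPricingSchema`), with a hypothetical helper to show why optionality matters for older pricing entries:

```typescript
// Sketch of the pricing-entry shape after this PR. All three new
// fields are optional, so entries lacking context data still validate.
type ModelPricing = {
	input_cost_per_token?: number;
	output_cost_per_token?: number;
	// New context-window fields from LiteLLM data:
	max_tokens?: number;
	max_input_tokens?: number;
	max_output_tokens?: number;
};

// Hypothetical helper: does this entry carry any usable context limit?
function hasContextLimit(p: ModelPricing): boolean {
	return p.max_input_tokens != null || p.max_tokens != null;
}
```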
Deploying with

| Status | Name | Latest Commit | Preview URL | Updated (UTC) |
|---|---|---|---|---|
| ✅ Deployment successful! View logs | ccusage-guide | d62807f | Commit Preview URL / Branch Preview URL | Aug 15 2025, 11:37 AM |
Code Review
This pull request effectively implements dynamic context limits for Claude models by fetching them from LiteLLM data, which is a great improvement over the previous hardcoded value. The changes are well-structured, including updates to data schemas, the pricing fetcher, and the statusline command. The addition of new test fixtures and scripts for different Claude models is also a valuable contribution to ensure the feature works as expected.
My review includes a couple of suggestions to improve code conciseness and maintainability, particularly in one of the new tests and in the handling of asynchronous results. Overall, this is a solid implementation of the new feature.
src/commands/statusline.ts
Outdated
```ts
// Verify this applies to all model variants
const opusPercentage = Math.round((tokenCount / 200_000) * 100);
const sonnet4Percentage = Math.round((tokenCount / 200_000) * 100);
const sonnet41Percentage = Math.round((tokenCount / 200_000) * 100);

expect(opusPercentage).toBe(2);
expect(sonnet4Percentage).toBe(2);
expect(sonnet41Percentage).toBe(2);
```
```ts
using fetcher = new PricingFetcher(offline);
const contextLimitResult = await fetcher.getModelContextLimit(modelId);
if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
	contextLimit = contextLimitResult.value;
}
else if (Result.isSuccess(contextLimitResult)) {
	// Context limit not available for this model in LiteLLM
	logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
}
else {
	// Error occurred
	logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
}
```
The if/else if/else chain used to handle the Result type can be simplified. Using a Result.pipe with Result.inspect and Result.inspectError would be more idiomatic for the @praha/byethrow library and can make the code more declarative and easier to read.
```ts
using fetcher = new PricingFetcher(offline);
await Result.pipe(
	fetcher.getModelContextLimit(modelId),
	Result.inspect((value) => {
		if (value != null) {
			contextLimit = value;
		} else {
			logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
		}
	}),
	Result.inspectError((error) => {
		logger.debug(`Failed to get context limit for model ${modelId}: ${error.message}`);
	}),
);
```
Actionable comments posted: 5
🧹 Nitpick comments (6)
test/statusline-test-sonnet41.json (1)

4-12: Nit: Avoid hardcoded absolute paths in fixtures

Using `/Users/test/project` is macOS-specific. If the statusline logic ever touches these paths, tests may behave differently on Linux/CI. Consider using a relative test path (e.g., `test/project`) to keep fixtures portable.

```diff
-	"cwd": "/Users/test/project",
+	"cwd": "test/project",
 	"model": {
 		"id": "claude-sonnet-4-1-20250805",
 		"display_name": "Sonnet 4.1"
 	},
 	"workspace": {
-		"current_dir": "/Users/test/project",
-		"project_dir": "/Users/test/project"
+		"current_dir": "test/project",
+		"project_dir": "test/project"
 	}
```

test/statusline-test-opus4.json (1)
4-12: Nit: Prefer portable paths in test JSON

Same note as the Sonnet 4.1 fixture; consider replacing `/Users/test/project` with a relative path to improve portability.

```diff
-	"cwd": "/Users/test/project",
+	"cwd": "test/project",
 	"model": {
 		"id": "claude-opus-4-1-20250805",
 		"display_name": "Opus 4.1"
 	},
 	"workspace": {
-		"current_dir": "/Users/test/project",
-		"project_dir": "/Users/test/project"
+		"current_dir": "test/project",
+		"project_dir": "test/project"
 	}
```

test/statusline-test-sonnet4.json (1)
4-12: Nit: Replace absolute paths with relative for cross-platform tests

To avoid OS-specific assumptions, consider using a relative path for `cwd`/`workspace` like `test/project`.

```diff
-	"cwd": "/Users/test/project",
+	"cwd": "test/project",
 	"model": {
 		"id": "claude-sonnet-4-20250514",
 		"display_name": "Sonnet 4"
 	},
 	"workspace": {
-		"current_dir": "/Users/test/project",
-		"project_dir": "/Users/test/project"
+		"current_dir": "test/project",
+		"project_dir": "test/project"
 	}
```

src/_types.ts (2)
132-135: Validate context limits as positive integers

LiteLLM context-window fields represent token counts; constrain them as integers and positive to catch bad data early.

```diff
-	// Context window limits from LiteLLM data
-	max_tokens: z.number().optional(),
-	max_input_tokens: z.number().optional(),
-	max_output_tokens: z.number().optional(),
+	// Context window limits from LiteLLM data (token counts)
+	max_tokens: z.number().int().positive().optional(),
+	max_input_tokens: z.number().int().positive().optional(),
+	max_output_tokens: z.number().int().positive().optional(),
```
132-135: Lint hint appears spurious; confirm typecheck/lint pass

Static-analysis hints flag `z.number().optional()` as unsafe here, but the same pattern is used above (lines 128–131) without issues. Likely a false positive. Please re-run typecheck/lint to confirm:

- bun typecheck
- bun run lint

If it persists, we can switch to `const TokenCount = z.number().int().positive();` and reuse it to appease the linter:

```diff
+const TokenCount = z.number().int().positive();
 export const modelPricingSchema = z.object({
 	input_cost_per_token: z.number().optional(),
 	output_cost_per_token: z.number().optional(),
 	cache_creation_input_token_cost: z.number().optional(),
 	cache_read_input_token_cost: z.number().optional(),
 	// Context window limits from LiteLLM data (token counts)
-	max_tokens: z.number().optional(),
-	max_input_tokens: z.number().optional(),
-	max_output_tokens: z.number().optional(),
+	max_tokens: TokenCount.optional(),
+	max_input_tokens: TokenCount.optional(),
+	max_output_tokens: TokenCount.optional(),
 });
```

src/data-loader.ts (1)
1438-1481: Consider simplifying error messages for clarity

The error catch blocks now say "Skip malformed JSON lines" but the actual issue could be any JSON parsing error, not just malformed structure. Consider keeping the original simpler message.

```diff
 } catch {
-	continue; // Skip malformed JSON lines
+	continue; // Skip invalid JSON lines
 }
```
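The skip-invalid-lines behavior under discussion amounts to this pattern. A sketch only: the real loader processes the transcript file rather than splitting an in-memory string:

```typescript
// Parse JSONL text line by line, silently skipping lines that fail
// JSON.parse (the "skip invalid JSON lines" behavior discussed above).
function parseJsonl(text: string): unknown[] {
	const entries: unknown[] = [];
	for (const line of text.split("\n")) {
		if (line.trim() === "") {
			continue; // skip blank lines
		}
		try {
			entries.push(JSON.parse(line));
		}
		catch {
			continue; // skip invalid JSON lines
		}
	}
	return entries;
}
```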
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (10)

- .lsmcp/memories/symbol_index_status.md (1 hunks)
- package.json (1 hunks)
- src/_consts.ts (0 hunks)
- src/_types.ts (1 hunks)
- src/commands/statusline.ts (2 hunks)
- src/data-loader.ts (4 hunks)
- src/pricing-fetcher.ts (2 hunks)
- test/statusline-test-opus4.json (1 hunks)
- test/statusline-test-sonnet4.json (1 hunks)
- test/statusline-test-sonnet41.json (1 hunks)
💤 Files with no reviewable changes (1)
- src/_consts.ts
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.{ts,tsx}: Use tab indentation
Use double quotes for strings
Do not use console.log except where explicitly disabled; use logger.ts instead
File paths must use Node.js path utilities for cross-platform compatibility
Use .ts extensions for local file imports (e.g., import { foo } from './utils.ts')
Prefer @praha/byethrow Result type over traditional try-catch for functional error handling
Use Result.try() to wrap operations that may throw (e.g., JSON parsing)
Use Result.isFailure() and Result.isSuccess() type guards instead of negations
Prefer early return pattern over ternary operators when handling Result
Keep traditional try-catch only for file I/O with complex handling or legacy code that's hard to refactor
For async operations, create a wrapper function with Result.try() and call it
Variables use camelCase (e.g., usageDataSchema, modelBreakdownSchema)
Types use PascalCase (e.g., UsageData, ModelBreakdown)
Constants may use UPPER_SNAKE_CASE (e.g., DEFAULT_CLAUDE_CODE_PATH)
Use in-source testing blocks guarded by if (import.meta.vitest != null); do not create separate test files
Files: src/pricing-fetcher.ts, src/_types.ts, src/commands/statusline.ts, src/data-loader.ts
**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.ts: Only export constants, functions, and types that are used by other modules; keep private values non-exported
Dynamic imports using await import() should only appear within in-source test blocks
Vitest globals (describe, it, expect) are available automatically without imports
Use fs-fixture createFixture() for mock data in tests that simulate Claude data directories
Model names in tests must use current Claude 4 models (Sonnet and Opus) and cover both for test coverage
Model naming must follow claude-{model-type}-{generation}-{date} (e.g., claude-sonnet-4-20250514)
Model names in tests must exactly match LiteLLM pricing database; verify existence before adding new tests
Cost calculations should only target models supported by LiteLLM; unsupported models cannot be calculated
Files: src/pricing-fetcher.ts, src/_types.ts, src/commands/statusline.ts, src/data-loader.ts
src/**/_*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Internal files should use underscore prefix (e.g., _types.ts, _utils.ts, _consts.ts)
Files:
src/_types.ts
package.json
📄 CodeRabbit Inference Engine (CLAUDE.md)
Dependencies should be added as devDependencies unless explicitly required otherwise
Files:
package.json
🧠 Learnings (3)
📚 Learning: 2025-08-13T12:07:22.065Z
Learnt from: CR
PR: ryoppippi/ccusage#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-13T12:07:22.065Z
Learning: After code changes, always run: bun run format, bun typecheck, bun run test
Applied to files:
package.json
📚 Learning: 2025-08-13T12:07:22.065Z
Learnt from: CR
PR: ryoppippi/ccusage#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-13T12:07:22.065Z
Learning: Use indexed symbol tools (search_symbol_from_index, get_document_symbols, get_definitions, find_references) and avoid reading entire files unless necessary
Applied to files:
.lsmcp/memories/symbol_index_status.md
📚 Learning: 2025-08-13T12:07:22.065Z
Learnt from: CR
PR: ryoppippi/ccusage#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-13T12:07:22.065Z
Learning: Applies to **/*.ts : Model names in tests must use current Claude 4 models (Sonnet and Opus) and cover both for test coverage
Applied to files:
src/commands/statusline.ts
🧬 Code Graph Analysis (3)
test/statusline-test-opus4.json (1)
src/debug.ts (1)
fixture(446-475)
src/commands/statusline.ts (1)
src/data-loader.ts (1)
calculateContextTokens(1412-1488)
src/data-loader.ts (2)
- src/pricing-fetcher.ts (1): PricingFetcher (22-288)
- src/logger.ts (1): logger (19-19)
🪛 ESLint
src/pricing-fetcher.ts
[error] 197-197: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 197-197: Unsafe member access .pipe on an error typed value.
(ts/no-unsafe-member-access)
[error] 199-199: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 199-199: Unsafe member access .map on an error typed value.
(ts/no-unsafe-member-access)
[error] 204-204: Unsafe assignment of an any value.
(ts/no-unsafe-assignment)
[error] 204-204: Unsafe member access .max_input_tokens on an any value.
(ts/no-unsafe-member-access)
[error] 204-204: Unsafe member access .max_tokens on an any value.
(ts/no-unsafe-member-access)
[error] 209-209: Unsafe return of a value of type any.
(ts/no-unsafe-return)
[error] 658-658: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 658-658: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 658-658: Unsafe member access .unwrap on an error typed value.
(ts/no-unsafe-member-access)
[error] 666-666: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 666-666: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 666-666: Unsafe member access .unwrap on an error typed value.
(ts/no-unsafe-member-access)
[error] 682-682: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 682-682: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 682-682: Unsafe member access .unwrap on an error typed value.
(ts/no-unsafe-member-access)
src/_types.ts
[error] 133-133: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 133-133: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 133-133: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 133-133: Unsafe member access .number on an error typed value.
(ts/no-unsafe-member-access)
[error] 133-133: Unsafe member access .optional on an error typed value.
(ts/no-unsafe-member-access)
[error] 134-134: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 134-134: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 134-134: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 134-134: Unsafe member access .number on an error typed value.
(ts/no-unsafe-member-access)
[error] 134-134: Unsafe member access .optional on an error typed value.
(ts/no-unsafe-member-access)
[error] 135-135: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 135-135: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 135-135: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 135-135: Unsafe member access .number on an error typed value.
(ts/no-unsafe-member-access)
[error] 135-135: Unsafe member access .optional on an error typed value.
(ts/no-unsafe-member-access)
src/commands/statusline.ts
[error] 167-167: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 167-167: Unsafe member access .pipe on an error typed value.
(ts/no-unsafe-member-access)
[error] 168-168: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 168-168: Unsafe member access .try on an error typed value.
(ts/no-unsafe-member-access)
[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string.
(ts/no-unsafe-argument)
[error] 169-169: Unsafe member access .transcript_path on an error typed value.
(ts/no-unsafe-member-access)
[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string | undefined.
(ts/no-unsafe-argument)
[error] 169-169: Unsafe member access .model on an error typed value.
(ts/no-unsafe-member-access)
[error] 169-169: Unsafe argument of type any assigned to a parameter of type boolean.
(ts/no-unsafe-argument)
[error] 169-169: Unsafe member access .values on an any value.
(ts/no-unsafe-member-access)
src/data-loader.ts
[error] 1457-1457: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 1458-1458: Unexpected any value in conditional. An explicit comparison or type conversion is required.
(ts/strict-boolean-expressions)
[error] 1458-1458: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1458-1458: Unsafe member access .isSuccess on an error typed value.
(ts/no-unsafe-member-access)
[error] 1458-1458: Unsafe member access .value on an error typed value.
(ts/no-unsafe-member-access)
[error] 1459-1459: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 1459-1459: Unsafe member access .value on an error typed value.
(ts/no-unsafe-member-access)
[error] 1461-1461: Unexpected any value in conditional. An explicit comparison or type conversion is required.
(ts/strict-boolean-expressions)
[error] 1461-1461: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1461-1461: Unsafe member access .isSuccess on an error typed value.
(ts/no-unsafe-member-access)
[error] 1463-1463: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1463-1463: Unsafe member access .debug on an error typed value.
(ts/no-unsafe-member-access)
[error] 1467-1467: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1467-1467: Unsafe member access .debug on an error typed value.
(ts/no-unsafe-member-access)
[error] 1467-1467: Unsafe member access .error on an error typed value.
(ts/no-unsafe-member-access)
[error] 1474-1474: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
🪛 GitHub Actions: CI
src/pricing-fetcher.ts
[error] 656-656: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.
[error] 664-664: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.
[error] 680-680: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.
🪛 LanguageTool
.lsmcp/memories/symbol_index_status.md
[grammar] ~8-~8: There might be a mistake here.
Context: ...oppippi/ghq/github.com/ryoppippi/ccusage Language: TypeScript/JavaScript Pattern ...
(QB_NEW_EN)
[grammar] ~9-~9: There might be a mistake here.
Context: .../ccusage Language: TypeScript/JavaScript Pattern Used: */.{ts,tsx,js,jsx} (aut...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...X STATISTICS: - Total files indexed: 34 - Total symbols: 209 - Indexing time: 1.44...
(QB_NEW_EN)
[grammar] ~15-~15: There might be a mistake here.
Context: ...l files indexed: 34 - Total symbols: 209 - Indexing time: 1.44 seconds - Average ti...
(QB_NEW_EN)
[grammar] ~16-~16: There might be a mistake here.
Context: ...mbols: 209 - Indexing time: 1.44 seconds - Average time per file: 41ms KEY SYMBOLS...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...ormatRemainingTime, calculateCost, etc.) - Core command handlers in src/commands/ -...
(QB_NEW_EN)
[grammar] ~22-~22: There might be a mistake here.
Context: ...- Core command handlers in src/commands/ - Utility functions in src/_*.ts files -...
(QB_NEW_EN)
[grammar] ~23-~23: There might be a mistake here.
Context: ...- Utility functions in src/_*.ts files - Type definitions and schemas RECENT ADD...
(QB_NEW_EN)
[grammar] ~28-~28: There might be a mistake here.
Context: ...extLimitmethod in PricingFetcher class - EnhancedcalculateContextTokens` functi...
(QB_NEW_EN)
[grammar] ~29-~29: There might be a mistake here.
Context: ...ens` function with model-specific limits - Updated statusline command to support dy...
(QB_NEW_EN)
[grammar] ~30-~30: There might be a mistake here.
Context: ... to support dynamic context calculations - Added context limit fields to modelPrici...
(QB_NEW_EN)
[grammar] ~31-~31: There might be a mistake here.
Context: ...ns, max_input_tokens, max_output_tokens) - Comprehensive test coverage for new func...
(QB_NEW_EN)
[grammar] ~32-~32: There might be a mistake here.
Context: ...sive test coverage for new functionality - Test fixtures for Claude 4 model variant...
(QB_NEW_EN)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Workers Builds: ccusage-guide
🔇 Additional comments (7)
test/statusline-test-opus4.json (1)
1-13: Approve: fixture consistent & Opus 4.1 present in LiteLLM pricing

Verified locally and upstream:
- Local fixtures: test/statusline-test-opus4.json and test/statusline-test.json contain "id": "claude-opus-4-1-20250805"; test/test-transcript.jsonl exists.
- LiteLLM pricing includes the model (bedrock/us.anthropic.claude-opus-4-1-20250805‑v1:0) with max_input_tokens=200,000 and max_tokens (max_output_tokens)=32,000 — so getModelContextLimit() should return a non-null value.
No changes required; approving.
test/statusline-test-sonnet4.json (1)
1-13: Solid addition; model ID matches expected naming

This fixture complements the 4.1 variants and aligns with the dynamic context limit changes. No issues spotted.
package.json (2)
56-59: New statusline test scripts are well-scoped and offline-friendly

Scripts cover Sonnet 4, Opus 4.1, and Sonnet 4.1 individually and in aggregate. Using `--offline` is appropriate for deterministic tests. Looks good.
56-59: Verified: fixtures present & --offline flag correctly wired

All referenced fixtures exist, and the statusline command exposes/passes the offline flag (and model id) down to context/pricing code.
- Fixtures found: test/statusline-test.json, test/statusline-test-sonnet4.json, test/statusline-test-opus4.json, test/statusline-test-sonnet41.json.
- package.json scripts (≈ lines 55–59) call the statusline command (model-specific scripts pass --offline).
- src/commands/statusline.ts — defines the offline arg (default true) and calls:
- calculateContextTokens(hookData.transcript_path, hookData.model.id, ctx.values.offline) (see occurrences around lines ~33, ~71, ~169).
- src/data-loader.ts — calculateContextTokens(transcriptPath, modelId?, offline = false) and constructs new PricingFetcher(offline) (see around lines ~1412 and ~1456).
No code changes required. Please proceed with the usual workflow checks (bun run format, bun typecheck, bun run test).
src/commands/statusline.ts (1)
166-169: Context info calculation looks good!

The implementation correctly passes the model ID and offline flag to calculate context tokens with model-specific limits. The error handling is appropriate, logging debug messages when calculation fails.
.lsmcp/memories/symbol_index_status.md (1)
1-57: Documentation looks comprehensive!

The symbol index status document provides excellent documentation of the dynamic context limits feature, including clear descriptions of the new functionality, test coverage details, and testing commands.
src/data-loader.ts (1)
1412-1415: Well-structured API changes!

The updated function signature properly accepts optional `modelId` and `offline` parameters with sensible defaults. The return type now includes the `contextLimit` in the response object, which is excellent for transparency.
```ts
// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
	using fetcher = new PricingFetcher(offline);
	const contextLimitResult = await fetcher.getModelContextLimit(modelId);
	if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
		contextLimit = contextLimitResult.value;
	}
	else if (Result.isSuccess(contextLimitResult)) {
		// Context limit not available for this model in LiteLLM
		logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
	}
	else {
		// Error occurred
		logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
	}
}
```
Fix Result handling in calculateContextTokens
The static analysis has flagged several TypeScript safety issues with the Result handling. The main issue is that getModelContextLimit returns a ResultAsync, not a synchronous Result, so you need to await it before checking its status.
Apply this fix to properly handle the async Result:
// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
using fetcher = new PricingFetcher(offline);
- const contextLimitResult = await fetcher.getModelContextLimit(modelId);
- if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
- contextLimit = contextLimitResult.value;
- }
- else if (Result.isSuccess(contextLimitResult)) {
- // Context limit not available for this model in LiteLLM
- logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
- }
- else {
- // Error occurred
- logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
- }
+ const contextLimitResult = await fetcher.getModelContextLimit(modelId);
+ Result.match(contextLimitResult, {
+ success: (value) => {
+ if (value != null) {
+ contextLimit = value;
+ } else {
+ // Context limit not available for this model in LiteLLM
+ logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
+ }
+ },
+ failure: (error) => {
+ // Error occurred
+ logger.debug(`Failed to get context limit for model ${modelId}: ${error.message}`);
+ }
+ });
}

Or using the more idiomatic Result.unwrap approach with a default:
// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
using fetcher = new PricingFetcher(offline);
- const contextLimitResult = await fetcher.getModelContextLimit(modelId);
- if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
- contextLimit = contextLimitResult.value;
- }
- else if (Result.isSuccess(contextLimitResult)) {
- // Context limit not available for this model in LiteLLM
- logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
- }
- else {
- // Error occurred
- logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
- }
+ const result = await Result.unwrap(fetcher.getModelContextLimit(modelId), null);
+ if (result != null) {
+ contextLimit = result;
+ } else {
+ logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
+ }
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
	using fetcher = new PricingFetcher(offline);
	const contextLimitResult = await fetcher.getModelContextLimit(modelId);
	if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
		contextLimit = contextLimitResult.value;
	}
	else if (Result.isSuccess(contextLimitResult)) {
		// Context limit not available for this model in LiteLLM
		logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
	}
	else {
		// Error occurred
		logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
	}
}

// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
	using fetcher = new PricingFetcher(offline);
	const contextLimitResult = await fetcher.getModelContextLimit(modelId);
	Result.match(contextLimitResult, {
		success: (value) => {
			if (value != null) {
				contextLimit = value;
			} else {
				// Context limit not available for this model in LiteLLM
				logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
			}
		},
		failure: (error) => {
			// Error occurred
			logger.debug(`Failed to get context limit for model ${modelId}: ${error.message}`);
		},
	});
}
🧰 Tools
🪛 ESLint
[error] 1457-1457: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 1458-1458: Unexpected any value in conditional. An explicit comparison or type conversion is required.
(ts/strict-boolean-expressions)
[error] 1458-1458: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1458-1458: Unsafe member access .isSuccess on an error typed value.
(ts/no-unsafe-member-access)
[error] 1458-1458: Unsafe member access .value on an error typed value.
(ts/no-unsafe-member-access)
[error] 1459-1459: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 1459-1459: Unsafe member access .value on an error typed value.
(ts/no-unsafe-member-access)
[error] 1461-1461: Unexpected any value in conditional. An explicit comparison or type conversion is required.
(ts/strict-boolean-expressions)
[error] 1461-1461: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1461-1461: Unsafe member access .isSuccess on an error typed value.
(ts/no-unsafe-member-access)
[error] 1463-1463: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1463-1463: Unsafe member access .debug on an error typed value.
(ts/no-unsafe-member-access)
[error] 1467-1467: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 1467-1467: Unsafe member access .debug on an error typed value.
(ts/no-unsafe-member-access)
[error] 1467-1467: Unsafe member access .error on an error typed value.
(ts/no-unsafe-member-access)
🤖 Prompt for AI Agents
In src/data-loader.ts around lines 1453 to 1469, the code calls
fetcher.getModelContextLimit which returns a ResultAsync but is treated
synchronously; change to await the async result (e.g., const contextLimitResult
= await fetcher.getModelContextLimit(modelId)) before checking Result.isSuccess,
then handle the success case by reading contextLimitResult.value, the
empty-success case with a debug log, and the failure case by logging
contextLimitResult.error.message; alternatively, use Result.unwrapOr/default to
get a numeric default contextLimit after awaiting the ResultAsync.
/**
 * Gets context window limit for a specific model from LiteLLM data
 * @param modelName - The model name to get context limit for
 * @returns The context limit in tokens, or null if not found
 */
async getModelContextLimit(modelName: string): Result.ResultAsync<number | null, Error> {
	return Result.pipe(
		this.getModelPricing(modelName),
		Result.map((pricing) => {
			if (pricing == null) {
				return null; // Model not found in LiteLLM pricing data
			}

			const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
			if (contextLimit == null) {
				return null; // No context limit data available for model
			}

			return contextLimit;
		}),
	);
}
Fix TypeScript errors in getModelContextLimit method
The static analysis has flagged several TypeScript safety issues in this method. The main issue is incorrect Result handling - you're using Result.pipe and Result.map but the return type indicates a ResultAsync, which means these operations should be async-aware.
Apply this fix to properly handle the Result types:
async getModelContextLimit(modelName: string): Result.ResultAsync<number | null, Error> {
return Result.pipe(
- this.getModelPricing(modelName),
+ await this.getModelPricing(modelName),
Result.map((pricing) => {
if (pricing == null) {
return null; // Model not found in LiteLLM pricing data
}
const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
if (contextLimit == null) {
return null; // No context limit data available for model
}
return contextLimit;
}),
);
}

Alternatively, since getModelPricing returns a ResultAsync, you could use the async chain directly:
async getModelContextLimit(modelName: string): Result.ResultAsync<number | null, Error> {
- return Result.pipe(
- this.getModelPricing(modelName),
- Result.map((pricing) => {
- if (pricing == null) {
- return null; // Model not found in LiteLLM pricing data
- }
-
- const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
- if (contextLimit == null) {
- return null; // No context limit data available for model
- }
-
- return contextLimit;
- }),
- );
+ const pricingResult = await this.getModelPricing(modelName);
+ return Result.map(pricingResult, (pricing) => {
+ if (pricing == null) {
+ return null; // Model not found in LiteLLM pricing data
+ }
+
+ const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
+ if (contextLimit == null) {
+ return null; // No context limit data available for model
+ }
+
+ return contextLimit;
+ });
}

🧰 Tools
🪛 ESLint
[error] 197-197: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 197-197: Unsafe member access .pipe on an error typed value.
(ts/no-unsafe-member-access)
[error] 199-199: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 199-199: Unsafe member access .map on an error typed value.
(ts/no-unsafe-member-access)
[error] 204-204: Unsafe assignment of an any value.
(ts/no-unsafe-assignment)
[error] 204-204: Unsafe member access .max_input_tokens on an any value.
(ts/no-unsafe-member-access)
[error] 204-204: Unsafe member access .max_tokens on an any value.
(ts/no-unsafe-member-access)
[error] 209-209: Unsafe return of a value of type any.
(ts/no-unsafe-return)
🤖 Prompt for AI Agents
In src/pricing-fetcher.ts around lines 191 to 212, the method uses
Result.pipe/Result.map on a ResultAsync which is incorrect; replace the pipe
usage with the async-aware chain by returning the ResultAsync from
getModelPricing(modelName) and calling its .map (or equivalent ResultAsync.map)
to transform the pricing to a number|null, keeping the same null checks for
pricing and contextLimit. Ensure the returned value is the ResultAsync from that
chain (not a plain Result), so types align with the declared
Result.ResultAsync<number | null, Error> return type.
describe('getModelContextLimit', () => {
	it('should return context limit from pricing data when available', async () => {
		using fetcher = new PricingFetcher(true); // Use offline mode with cached data

		// Mock pricing data with context limits
		const mockPricing = new Map([
			['test-model', {
				input_cost_per_token: 0.00001,
				output_cost_per_token: 0.00003,
				max_input_tokens: 100_000,
			}],
		]);
		fetcher.cachedPricing = mockPricing;

		const contextLimit = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
		expect(contextLimit).toBe(100_000);
	});

	it('should return null when model not found', async () => {
		using fetcher = new PricingFetcher(true);
		fetcher.cachedPricing = new Map(); // Empty cache

		const result = await Result.unwrap(fetcher.getModelContextLimit('unknown-model'));
		expect(result).toBeNull();
	});

	it('should return null when context fields are not available', async () => {
		using fetcher = new PricingFetcher(true);

		// Mock pricing data without context limits
		const mockPricing = new Map([
			['test-model', {
				input_cost_per_token: 0.00001,
				output_cost_per_token: 0.00003,
			}],
		]);
		fetcher.cachedPricing = mockPricing;

		const result = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
		expect(result).toBeNull();
	});
});
Test code accesses private member cachedPricing
The test code directly accesses the private cachedPricing member, which violates encapsulation and causes TypeScript errors. Consider exposing this functionality through a proper testing interface or using a test-specific subclass.
Apply this fix to use proper encapsulation:
describe('getModelContextLimit', () => {
it('should return context limit from pricing data when available', async () => {
- using fetcher = new PricingFetcher(true); // Use offline mode with cached data
-
- // Mock pricing data with context limits
- const mockPricing = new Map([
- ['test-model', {
- input_cost_per_token: 0.00001,
- output_cost_per_token: 0.00003,
- max_input_tokens: 100_000,
- }],
- ]);
- fetcher.cachedPricing = mockPricing;
+ // Create a test subclass to expose the private member for testing
+ class TestPricingFetcher extends PricingFetcher {
+ setCachedPricing(pricing: Map<string, ModelPricing>) {
+ this.cachedPricing = pricing;
+ }
+ }
+
+ using fetcher = new TestPricingFetcher(true); // Use offline mode with cached data
+
+ // Mock pricing data with context limits
+ const mockPricing = new Map([
+ ['test-model', {
+ input_cost_per_token: 0.00001,
+ output_cost_per_token: 0.00003,
+ max_input_tokens: 100_000,
+ }],
+ ]);
+ fetcher.setCachedPricing(mockPricing);
const contextLimit = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
expect(contextLimit).toBe(100_000);
});
it('should return null when model not found', async () => {
- using fetcher = new PricingFetcher(true);
- fetcher.cachedPricing = new Map(); // Empty cache
+ class TestPricingFetcher extends PricingFetcher {
+ setCachedPricing(pricing: Map<string, ModelPricing>) {
+ this.cachedPricing = pricing;
+ }
+ }
+
+ using fetcher = new TestPricingFetcher(true);
+ fetcher.setCachedPricing(new Map()); // Empty cache
const result = await Result.unwrap(fetcher.getModelContextLimit('unknown-model'));
expect(result).toBeNull();
});
it('should return null when context fields are not available', async () => {
- using fetcher = new PricingFetcher(true);
+ class TestPricingFetcher extends PricingFetcher {
+ setCachedPricing(pricing: Map<string, ModelPricing>) {
+ this.cachedPricing = pricing;
+ }
+ }
+
+ using fetcher = new TestPricingFetcher(true);
// Mock pricing data without context limits
const mockPricing = new Map([
['test-model', {
input_cost_per_token: 0.00001,
output_cost_per_token: 0.00003,
}],
]);
- fetcher.cachedPricing = mockPricing;
+ fetcher.setCachedPricing(mockPricing);
const result = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
expect(result).toBeNull();
});
});

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 ESLint
[error] 658-658: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 658-658: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 658-658: Unsafe member access .unwrap on an error typed value.
(ts/no-unsafe-member-access)
[error] 666-666: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 666-666: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 666-666: Unsafe member access .unwrap on an error typed value.
(ts/no-unsafe-member-access)
[error] 682-682: Unsafe assignment of an error typed value.
(ts/no-unsafe-assignment)
[error] 682-682: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 682-682: Unsafe member access .unwrap on an error typed value.
(ts/no-unsafe-member-access)
🪛 GitHub Actions: CI
[error] 656-656: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.
[error] 664-664: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.
[error] 680-680: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.
🤖 Prompt for AI Agents
In src/pricing-fetcher.ts around lines 644 to 685 the tests directly read/write
the private cachedPricing field; add a small test-friendly API instead:
implement a public method setCachedPricing(pricing: Map<string,
PricingEntryType>) (or a constructor overload) on PricingFetcher that assigns
the passed map to the private cachedPricing field, keep it typed and document it
as test-only (or guard usage with NODE_ENV==='test' if desired), then update the
tests to call fetcher.setCachedPricing(...) instead of touching cachedPricing
directly.
{
	"session_id": "test-session-sonnet41",
	"transcript_path": "test/test-transcript.jsonl",
	"cwd": "/Users/test/project",
	"model": {
		"id": "claude-sonnet-4-1-20250805",
		"display_name": "Sonnet 4.1"
	},
	"workspace": {
		"current_dir": "/Users/test/project",
		"project_dir": "/Users/test/project"
	}
}
💡 Verification agent
🧩 Analysis chain
Fixture looks good; confirm model ID exists in LiteLLM pricing
The structure aligns with the other statusline fixtures and the PR’s dynamic context-limit plumbing. Please verify that claude-sonnet-4-1-20250805 exactly matches a model present in the LiteLLM pricing database so dynamic context limits resolve instead of falling back to 200K.
Run this script to sanity-check local references and fixture presence:
If you want to validate against the latest LiteLLM pricing list, please run this web query:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Checking fixture files exist..."
fd -a 'statusline-test-*.json' test | sort
echo "Searching for model id references..."
rg -n -F "claude-sonnet-4-1-20250805" -C2 || true
echo "Confirm statusline command supports --offline..."
rg -n -C2 -F "--offline" src | sed -n '1,200p' || true

Length of output: 1179
🌐 Web query:
Does LiteLLM’s pricing database include models named "claude-sonnet-4-1-20250805" and "claude-opus-4-1-20250805"? If yes, what are their max_input_tokens / max_tokens context limits?
💡 Result:
Short answer
- claude-opus-4-1-20250805 — Yes. LiteLLM’s model/cost map includes Opus 4.1 (Bedrock mapping shown in their release notes). Its context limits: 200,000 input-token context window and a 32,000-token maximum output. (docs.litellm.ai, docs.anthropic.com)
- claude-sonnet-4-1-20250805 — No. Anthropic’s Sonnet 4 is published as claude-sonnet-4-20250514 (not a 4-1 build with the 20250805 tag), and LiteLLM documents and release notes reference the Sonnet 4 (20250514) entry instead. If you mean claude-sonnet-4-20250514, LiteLLM does include Sonnet 4 and its limits are: 200,000 input-token context window and a 64,000-token maximum output. (docs.litellm.ai, docs.anthropic.com)
If you’d like, I can fetch the exact entries/lines from LiteLLM’s model_prices_and_context_window.json and paste the JSON fields for those models.
Citations:
- 1: https://docs.litellm.ai/release_notes/v1-75-5?utm_source=chatgpt.com
- 2: https://docs.anthropic.com/en/docs/about-claude/models/all-models?utm_source=chatgpt.com
- 3: https://docs.litellm.ai/docs/providers/anthropic?utm_source=chatgpt.com
- 4: https://docs.anthropic.com/en/docs/about-claude/models/all-models?utm_source=chatgpt.com
Model ID mismatch — update fixture or add LiteLLM mapping
Quick summary: the fixture file does contain "claude-sonnet-4-1-20250805", but a LiteLLM check shows that exact model id is not present in their pricing DB. LiteLLM lists Sonnet 4 as "claude-sonnet-4-20250514" (context: 200,000 input / 64,000 output). Opus 4.1 is present as "claude-opus-4-1-20250805" (200,000 / 32,000). If the exact id in the fixture is not in LiteLLM’s model map, the dynamic context lookup may not resolve and could fall back.
Files to fix / inspect:
- test/statusline-test-sonnet41.json — model.id: "claude-sonnet-4-1-20250805" (line ~6)
Recommended fix (pick one):
- Replace the fixture model id with the exact LiteLLM entry: "claude-sonnet-4-20250514", OR
- Add a mapping for "claude-sonnet-4-1-20250805" to LiteLLM’s model_prices_and_context_window.json with the correct context limits (suggested: input 200,000 / output 64,000), so the dynamic lookup resolves.
Updated verification script (corrects the literal "--offline" search):
#!/bin/bash
set -euo pipefail
echo "Checking fixture files exist..."
fd -a 'statusline-test-*.json' test | sort
echo "Searching for model id references..."
rg -n -F "claude-sonnet-4-1-20250805" -C2 || true
echo "Confirm statusline command supports --offline (literal search)"
# use `--` so ripgrep treats the next arg as the pattern, not an option
rg -n -C2 -F -- "--offline" src | sed -n '1,200p' || true

If you want, I can fetch the exact LiteLLM JSON entries (model_prices_and_context_window.json) for the Sonnet/Opus models and paste the fields here.
Remove test logic from statusline.ts as it should not contain tests - Tests belong in data-loader.ts or separate test files - Commands should focus on execution logic only - Maintains clean separation of concerns
Actionable comments posted: 0
🔭 Outside diff range comments (1)
src/commands/statusline.ts (1)
166-191: Resolve ts/no-unsafe-* violations and avoid shadowing; locally narrow types for the calculateContextTokens() call. ESLint is flagging multiple ts/no-unsafe-* violations on Line 169 due to:
- Accessing hookData fields without a narrowed type in this scope
- Accessing ctx.values.offline where ctx is untyped in this command’s run() signature
Additionally, the map callback parameter named ctx shadows the outer command context, hurting readability.

Locally narrow the types we need, coerce offline to a boolean, and avoid the shadowed identifier.
Apply this diff:
  // Calculate context tokens from transcript with model-specific limits
- const contextInfo = await Result.pipe(
+ // Narrow types locally to satisfy eslint ts/no-unsafe-* without impacting outer scope
+ type MinimalHookData = { transcript_path: string; model: { id?: string } };
+ const { transcript_path, model } = hookData as MinimalHookData;
+ const offline = Boolean((ctx as { values?: { offline?: unknown } }).values?.offline);
+
+ const contextInfo = await Result.pipe(
  	Result.try({
- 		try: calculateContextTokens(hookData.transcript_path, hookData.model.id, ctx.values.offline),
+ 		try: calculateContextTokens(transcript_path, model.id, offline),
  		catch: error => error,
  	}),
  	Result.inspectError(error => logger.debug(`Failed to calculate context tokens: ${error instanceof Error ? error.message : String(error)}`)),
- 	Result.map((ctx) => {
- 		if (ctx == null) {
+ 	Result.map((info) => {
+ 		if (info == null) {
  			return undefined;
  		}
  		// Format context percentage with color coding using configurable thresholds
  		const thresholds = getContextUsageThresholds();
- 		const color = ctx.percentage < thresholds.LOW
+ 		const color = info.percentage < thresholds.LOW
  			? pc.green
- 			: ctx.percentage < thresholds.MEDIUM
+ 			: info.percentage < thresholds.MEDIUM
  				? pc.yellow
  				: pc.red;
- 		const coloredPercentage = color(`${ctx.percentage}%`);
+ 		const coloredPercentage = color(`${info.percentage}%`);
  		// Format token count with thousand separators
- 		const tokenDisplay = ctx.inputTokens.toLocaleString();
+ 		const tokenDisplay = info.inputTokens.toLocaleString();
  		return `${tokenDisplay} (${coloredPercentage})`;
  	}),
  	Result.unwrap(undefined),
  );
🧹 Nitpick comments (1)
src/commands/statusline.ts (1)
48-55: Wrap JSON.parse in Result.try() to match guidelines and avoid hard exits on parse errors

JSON.parse can throw; current code will crash before zod validation can produce a cleaner message. Per the repo’s guidelines, prefer Result.try() for operations that may throw.
Example adjustment (outside the changed hunk; illustrative snippet):
// Read input from stdin
const stdin = await getStdin();
if (stdin.length === 0) {
	log("❌ No input provided");
	process.exit(1);
}
const hookData = await Result.pipe(
	Result.try({
		try: () => JSON.parse(stdin.trim()),
		catch: err => err,
	}),
	Result.map(json => {
		const parsed = statuslineHookJsonSchema.safeParse(json);
		return parsed.success ? parsed.data : null;
	}),
	Result.inspectError(err => log("❌ Failed to parse input:", err instanceof Error ? err.message : String(err))),
	Result.unwrap<unknown | null>(null),
);
if (hookData == null) {
	log("❌ Invalid input format");
	process.exit(1);
}

If you want, I can turn this into a precise diff for this file.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
src/commands/statusline.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.{ts,tsx}: Use tab indentation
Use double quotes for strings
Do not use console.log except where explicitly disabled; use logger.ts instead
File paths must use Node.js path utilities for cross-platform compatibility
Use .ts extensions for local file imports (e.g., import { foo } from './utils.ts')
Prefer @praha/byethrow Result type over traditional try-catch for functional error handling
Use Result.try() to wrap operations that may throw (e.g., JSON parsing)
Use Result.isFailure() and Result.isSuccess() type guards instead of negations
Prefer early return pattern over ternary operators when handling Result
Keep traditional try-catch only for file I/O with complex handling or legacy code that's hard to refactor
For async operations, create a wrapper function with Result.try() and call it
Variables use camelCase (e.g., usageDataSchema, modelBreakdownSchema)
Types use PascalCase (e.g., UsageData, ModelBreakdown)
Constants may use UPPER_SNAKE_CASE (e.g., DEFAULT_CLAUDE_CODE_PATH)
Use in-source testing blocks guarded by if (import.meta.vitest != null); do not create separate test files
Files:
src/commands/statusline.ts
**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
**/*.ts: Only export constants, functions, and types that are used by other modules; keep private values non-exported
Dynamic imports using await import() should only appear within in-source test blocks
Vitest globals (describe, it, expect) are available automatically without imports
Use fs-fixture createFixture() for mock data in tests that simulate Claude data directories
Model names in tests must use current Claude 4 models (Sonnet and Opus) and cover both for test coverage
Model naming must follow claude-{model-type}-{generation}-{date} (e.g., claude-sonnet-4-20250514)
Model names in tests must exactly match LiteLLM pricing database; verify existence before adding new tests
Cost calculations should only target models supported by LiteLLM; unsupported models cannot be calculated
Files:
src/commands/statusline.ts
🧬 Code Graph Analysis (1)
src/commands/statusline.ts (1)
src/data-loader.ts (6)
calculateContextTokens (1412-1488)
calculateContextTokens (1414-1472)
it (4718-4761)
fixture (4739-4749)
fixture (4724-4737)
fixture (4751-4760)
🪛 ESLint
src/commands/statusline.ts
[error] 167-167: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 167-167: Unsafe member access .pipe on an error typed value.
(ts/no-unsafe-member-access)
[error] 168-168: Unsafe call of a(n) error type typed value.
(ts/no-unsafe-call)
[error] 168-168: Unsafe member access .try on an error typed value.
(ts/no-unsafe-member-access)
[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string.
(ts/no-unsafe-argument)
[error] 169-169: Unsafe member access .transcript_path on an error typed value.
(ts/no-unsafe-member-access)
[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string | undefined.
(ts/no-unsafe-argument)
[error] 169-169: Unsafe member access .model on an error typed value.
(ts/no-unsafe-member-access)
[error] 169-169: Unsafe argument of type any assigned to a parameter of type boolean.
(ts/no-unsafe-argument)
[error] 169-169: Unsafe member access .values on an any value.
(ts/no-unsafe-member-access)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Workers Builds: ccusage-guide
Replace @typescript-eslint/ban-ts-comment with ts/ban-ts-comment - Use shortened ESLint rule notation for consistency - Maintains same functionality with cleaner comments - Follows project style guidelines
Summary
This PR implements dynamic context limits for Claude models, replacing the hardcoded 200K limit with model-specific limits fetched from LiteLLM data.
Key Changes
Technical Implementation
- PricingFetcher.getModelContextLimit() - Retrieve model-specific context limits
- calculateContextTokens() - Accept modelId parameter for dynamic limit lookup

Code Quality Improvements
Testing
New test commands added:
- bun run test:statusline:sonnet4 - Test with Claude 4 Sonnet
- bun run test:statusline:opus4 - Test with Claude 4.1 Opus
- bun run test:statusline:sonnet41 - Test with Claude 4.1 Sonnet
- bun run test:statusline:all - Run all model tests

Quality Assurance
Backward Compatibility
Implementation Details
Commits Summary (11 commits)
This feature enables accurate, model-specific context usage reporting while maintaining excellent code quality and comprehensive test coverage.