Conversation

@ryoppippi ryoppippi commented Aug 15, 2025

Summary

This PR implements dynamic context limits for Claude models, replacing the hardcoded 200K limit with model-specific limits fetched from LiteLLM data.

Key Changes

  • Dynamic Context Calculation: Replace fixed CONTEXT_LIMIT constant with model-specific limits
  • LiteLLM Integration: Fetch context limits from LiteLLM pricing data (max_input_tokens or max_tokens)
  • Enhanced Type Safety: Add context limit fields to modelPricingSchema
  • Improved Statusline: Pass model ID to enable dynamic context percentage calculation
  • Comprehensive Testing: Add test fixtures and commands for Claude 4 model variants
  • Code Quality: Remove inappropriate test code from command files and maintain a clean separation of concerns

Technical Implementation

  • PricingFetcher.getModelContextLimit() - Retrieve model-specific context limits
  • calculateContextTokens() - Accept modelId parameter for dynamic limit lookup
  • Fall back to the 200K limit when model data is unavailable, preserving backward compatibility
  • Proper error handling and logging for context limit retrieval
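The resolution order described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the project's actual API (which lives in `PricingFetcher` and the data loader):

```typescript
// Illustrative sketch of the fallback logic: prefer max_input_tokens,
// then max_tokens, then the hardcoded 200K default for unknown models.
const DEFAULT_CONTEXT_LIMIT = 200_000;

type ContextLimits = {
	max_input_tokens?: number;
	max_tokens?: number;
};

function resolveContextLimit(limits: ContextLimits | null): number {
	return limits?.max_input_tokens ?? limits?.max_tokens ?? DEFAULT_CONTEXT_LIMIT;
}
```

With no model data at all, the function degrades to the previous hardcoded behavior, which is what keeps the change backward compatible.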

Code Quality Improvements

  • Removed inappropriate test logic from statusline command file
  • Enhanced ESLint compliance with proper TypeScript private property access
  • Maintained separation of concerns - commands focus on execution, tests belong in appropriate modules
  • All quality checks pass: lint, typecheck, and build successful

Testing

New test commands added:

  • bun run test:statusline:sonnet4 - Test with Claude Sonnet 4
  • bun run test:statusline:opus4 - Test with Claude Opus 4.1
  • bun run test:statusline:sonnet41 - Test with Claude Sonnet 4.1
  • bun run test:statusline:all - Run all model tests

Quality Assurance

  • ESLint checks pass
  • TypeScript compilation successful
  • Build process completes without errors
  • Unit tests for getModelContextLimit method
  • Context percentage calculation tests
  • Model-specific test fixtures created
  • Manual testing with different Claude 4 models
  • Backward compatibility verified
  • Clean code architecture maintained

Backward Compatibility

  • Maintains 200K fallback for unknown models
  • No breaking changes to existing APIs
  • All existing functionality preserved

Implementation Details

Commits Summary (11 commits)

  1. Remove hardcoded CONTEXT_LIMIT constant
  2. Add context limit fields to model pricing schema
  3. Add getModelContextLimit method to PricingFetcher
  4. Update statusline to use dynamic context limits
  5. Add Claude 4 model test commands
  6. Add test data files for Claude 4 variants
  7. Add comprehensive unit tests
  8. Update lsmcp memory with feature documentation
  9. Remove inappropriate test code from statusline command
  10. Fix TypeScript private property access in tests
  11. Use shortened ESLint rule names for consistency

This feature enables accurate, model-specific context usage reporting while maintaining excellent code quality and comprehensive test coverage.

Replace fixed 200K context limit with dynamic model-specific limits

- Remove CONTEXT_LIMIT from _consts.ts as it is no longer needed

- Remove corresponding import from data-loader.ts

This prepares for implementing dynamic context limits per model

Add max_tokens, max_input_tokens, and max_output_tokens fields

- These fields will be used to determine model-specific context limits

- Matches LiteLLM pricing data structure

- All fields are optional to maintain backward compatibility
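A plain-TypeScript stand-in for the new schema fields (the real project declares them with zod as `z.number().optional()`; this sketch only illustrates the shape and what a sane token-count value looks like):

```typescript
// Shape of the pricing entry after the new optional fields are added.
type ModelPricing = {
	input_cost_per_token?: number;
	output_cost_per_token?: number;
	max_tokens?: number;
	max_input_tokens?: number;
	max_output_tokens?: number;
};

// A context-window field, when present, should be a positive integer
// (token counts from LiteLLM data are whole numbers).
function isValidTokenCount(value: unknown): value is number {
	return typeof value === "number" && Number.isInteger(value) && value > 0;
}
```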

Add method to retrieve model-specific context limits from LiteLLM data

- Returns max_input_tokens or falls back to max_tokens

- Returns null if model not found or no context limit data available

- Uses existing Result pattern for error handling
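The lookup order can be sketched like this. It is illustrative only: the real method returns a `Result` and reads from cached LiteLLM pricing data rather than a plain `Map`:

```typescript
// Sketch of getModelContextLimit's lookup order: max_input_tokens first,
// then max_tokens, and null when the model or its limits are unknown.
type Pricing = { max_input_tokens?: number; max_tokens?: number };

function getModelContextLimit(
	pricingData: Map<string, Pricing>,
	modelName: string,
): number | null {
	const pricing = pricingData.get(modelName);
	if (pricing == null) {
		return null; // model not found in pricing data
	}
	return pricing.max_input_tokens ?? pricing.max_tokens ?? null;
}
```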

Pass model ID and offline flag to calculateContextTokens

- Enable model-specific context limit calculation

- Update function call to include modelId from hookData

- Pass offline flag for consistent behavior
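Once the model-specific limit is resolved, the percentage calculation itself is simple. A hypothetical helper showing the arithmetic (with the fixtures' 3000 input tokens against a 200K limit, usage rounds to 2%, matching the expectations in the review comments below):

```typescript
// Context usage as a rounded percentage of the resolved limit.
function contextPercentage(inputTokens: number, contextLimit: number): number {
	return Math.round((inputTokens / contextLimit) * 100);
}
```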

Add npm scripts to test statusline with different Claude 4 models

- test:statusline:sonnet4 - Test with Claude Sonnet 4

- test:statusline:opus4 - Test with Claude Opus 4.1

- test:statusline:sonnet41 - Test with Claude Sonnet 4.1

- test:statusline:all - Run all model tests in sequence

All tests use --offline flag for consistent testing

Add test fixtures for different Claude 4 model variants

- statusline-test-sonnet4.json - Claude 4 Sonnet test data

- statusline-test-opus4.json - Claude 4.1 Opus test data

- statusline-test-sonnet41.json - Claude 4.1 Sonnet test data

Each test file contains hookData with 3000 input tokens for testing context percentage calculations
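Based on the fields quoted in the review comments later in the thread, a fixture has roughly this shape (Sonnet 4 shown; the transcript/hookData fields are elided here):

```json
{
	"cwd": "/Users/test/project",
	"model": {
		"id": "claude-sonnet-4-20250514",
		"display_name": "Sonnet 4"
	},
	"workspace": {
		"current_dir": "/Users/test/project",
		"project_dir": "/Users/test/project"
	}
}
```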

Add comprehensive test coverage for dynamic context limits

- Add getModelContextLimit tests in pricing-fetcher.ts

- Add context percentage calculation tests in statusline.ts

- Test model data presence/absence scenarios

- Verify consistent percentage calculations across Claude 4 models

Update symbol index status memory to reflect recent changes

- Document new getModelContextLimit method

- Record enhanced calculateContextTokens functionality

- Add testing commands for Claude 4 model variants

- Update statistics after symbol index refresh

- Mark feature as fully operational and tested

coderabbitai bot commented Aug 15, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Warning

Rate limit exceeded

@ryoppippi has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 15 minutes and 42 seconds before requesting another review.

After the wait time has elapsed, a review can be triggered with the @coderabbitai review command as a PR comment, or by pushing new commits to this PR.

📥 Commits

Reviewing files that changed from the base of the PR and between 3569b4a and d62807f.

📒 Files selected for processing (1)
  • src/pricing-fetcher.ts (2 hunks)

Walkthrough

Removes a fixed CONTEXT_LIMIT, adds model-specific context limits to pricing schema and PricingFetcher, updates calculateContextTokens and statusline to use model-aware limits (with offline fallback), adds statusline test fixtures/scripts, and documents symbol index onboarding/status.

Changes

Cohort / File(s) Summary
Symbol index docs
​.lsmcp/memories/symbol_index_status.md
New onboarding/status doc documenting symbol index metrics and recent dynamic context limits additions.
NPM scripts for statusline tests
package.json
Adds test:statusline:all and per-model statusline test scripts (:opus4, :sonnet4, :sonnet41); existing test:statusline unchanged.
Constants cleanup
src/_consts.ts
Removes exported CONTEXT_LIMIT constant (previously 200_000).
Pricing schema extensions
src/_types.ts
Adds optional max_tokens, max_input_tokens, max_output_tokens to modelPricingSchema.
Statusline model-aware usage
src/commands/statusline.ts
Passes model.id and offline flag into calculateContextTokens to compute model-specific context usage.
Dynamic context calculation
src/data-loader.ts
calculateContextTokens signature changed to (transcriptPath, modelId?, offline=false) and may return null; fetches model-specific limit via PricingFetcher with 200k fallback; percentage and returned object now use resolved contextLimit.
PricingFetcher API
src/pricing-fetcher.ts
Adds getModelContextLimit(modelName) returning max_input_tokens or max_tokens or null; includes tests for offline/missing cases.
Statusline fixtures
test/statusline-test-sonnet4.json, test/statusline-test-opus4.json, test/statusline-test-sonnet41.json
New JSON fixtures describing statusline test sessions for Sonnet 4, Opus 4.1, Sonnet 4.1 (pointing to shared transcript/workspace).

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant Statusline
  participant DataLoader as calculateContextTokens
  participant Pricing as PricingFetcher
  participant PricingData as Pricing Source

  User->>Statusline: run statusline (session with model id, offline?)
  Statusline->>DataLoader: calculateContextTokens(transcriptPath, modelId, offline)
  DataLoader->>Pricing: getModelContextLimit(modelId) (honor offline)
  Pricing->>PricingData: getModelPricing(modelId)
  PricingData-->>Pricing: pricing (may include max_input_tokens/max_tokens)
  Pricing-->>DataLoader: numeric contextLimit | null
  DataLoader-->>Statusline: { inputTokens, percentage, contextLimit } | null
  Statusline-->>User: render statusline with context usage

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Poem

A rabbit counts tokens by moonlit byte,
Hops from constants into model light.
Fetcher listens, limits softly told,
Statusline sings the usage bold.
Tests and fixtures snug in burrow bright. 🐇✨



@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @ryoppippi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces dynamic context window limits for Claude models, moving away from a fixed 200K token limit. It integrates with LiteLLM to fetch model-specific context limits, ensuring more accurate context usage calculations and improved status line reporting.

Highlights

  • Dynamic Context Calculation: Replaced the hardcoded 200K CONTEXT_LIMIT with model-specific values fetched dynamically.
  • LiteLLM Integration: Implemented fetching of context limits (max_input_tokens or max_tokens) directly from LiteLLM pricing data.
  • Enhanced Type Safety: Updated modelPricingSchema to include new optional fields for max_tokens, max_input_tokens, and max_output_tokens.
  • Improved Statusline: Modified the statusline command to pass the model ID, enabling dynamic context percentage calculation based on the active model's limits.
  • Comprehensive Testing: Added new test fixtures and package.json scripts for various Claude 4 model variants (Sonnet 4, Opus 4.1, Sonnet 4.1) to validate dynamic context limit behavior.


cloudflare-workers-and-pages bot commented Aug 15, 2025

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

  • Status: ✅ Deployment successful (View logs)
  • Name: ccusage-guide
  • Latest commit: d62807f
  • Updated (UTC): Aug 15 2025, 11:37 AM


pkg-pr-new bot commented Aug 15, 2025


npm i https://pkg.pr.new/ryoppippi/ccusage@514

commit: d62807f


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively implements dynamic context limits for Claude models by fetching them from LiteLLM data, which is a great improvement over the previous hardcoded value. The changes are well-structured, including updates to data schemas, the pricing fetcher, and the statusline command. The addition of new test fixtures and scripts for different Claude models is also a valuable contribution to ensure the feature works as expected.

My review includes a couple of suggestions to improve code conciseness and maintainability, particularly in one of the new tests and in the handling of asynchronous results. Overall, this is a solid implementation of the new feature.

Comment on lines 218 to 225
// Verify this applies to all model variants
const opusPercentage = Math.round((tokenCount / 200_000) * 100);
const sonnet4Percentage = Math.round((tokenCount / 200_000) * 100);
const sonnet41Percentage = Math.round((tokenCount / 200_000) * 100);

expect(opusPercentage).toBe(2);
expect(sonnet4Percentage).toBe(2);
expect(sonnet41Percentage).toBe(2);
medium

This block of code is redundant. The calculation and assertion on lines 215-216 already verify the expected percentage. The variables opusPercentage, sonnet4Percentage, and sonnet41Percentage all hold the identical value which has already been tested. This block can be removed for conciseness.

Comment on lines +1456 to +1468
using fetcher = new PricingFetcher(offline);
const contextLimitResult = await fetcher.getModelContextLimit(modelId);
if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
contextLimit = contextLimitResult.value;
}
else if (Result.isSuccess(contextLimitResult)) {
// Context limit not available for this model in LiteLLM
logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
}
else {
// Error occurred
logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
}
medium

The if/else if/else chain used to handle the Result type can be simplified. Using a Result.pipe with Result.inspect and Result.inspectError would be more idiomatic for the @praha/byethrow library and can make the code more declarative and easier to read.

using fetcher = new PricingFetcher(offline);
await Result.pipe(
	fetcher.getModelContextLimit(modelId),
	Result.inspect((value) => {
		if (value != null) {
			contextLimit = value;
		} else {
			logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
		}
	}),
	Result.inspectError((error) => {
		logger.debug(`Failed to get context limit for model ${modelId}: ${error.message}`);
	}),
);


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (6)
test/statusline-test-sonnet41.json (1)

4-12: Nit: Avoid hardcoded absolute paths in fixtures

Using /Users/test/project is macOS-specific. If the statusline logic ever touches these paths, tests may behave differently on Linux/CI. Consider using a relative test path (e.g., test/project) to keep fixtures portable.

-  "cwd": "/Users/test/project",
+  "cwd": "test/project",
   "model": {
     "id": "claude-sonnet-4-1-20250805",
     "display_name": "Sonnet 4.1"
   },
   "workspace": {
-    "current_dir": "/Users/test/project",
-    "project_dir": "/Users/test/project"
+    "current_dir": "test/project",
+    "project_dir": "test/project"
   }
test/statusline-test-opus4.json (1)

4-12: Nit: Prefer portable paths in test JSON

Same note as the Sonnet 4.1 fixture; consider replacing /Users/test/project with a relative path to improve portability.

-  "cwd": "/Users/test/project",
+  "cwd": "test/project",
   "model": {
     "id": "claude-opus-4-1-20250805",
     "display_name": "Opus 4.1"
   },
   "workspace": {
-    "current_dir": "/Users/test/project",
-    "project_dir": "/Users/test/project"
+    "current_dir": "test/project",
+    "project_dir": "test/project"
   }
test/statusline-test-sonnet4.json (1)

4-12: Nit: Replace absolute paths with relative for cross-platform tests

To avoid OS-specific assumptions, consider using a relative path for cwd/workspace like test/project.

-  "cwd": "/Users/test/project",
+  "cwd": "test/project",
   "model": {
     "id": "claude-sonnet-4-20250514",
     "display_name": "Sonnet 4"
   },
   "workspace": {
-    "current_dir": "/Users/test/project",
-    "project_dir": "/Users/test/project"
+    "current_dir": "test/project",
+    "project_dir": "test/project"
   }
src/_types.ts (2)

132-135: Validate context limits as positive integers

LiteLLM context-window fields represent token counts; constrain them as integers and positive to catch bad data early.

- // Context window limits from LiteLLM data
- max_tokens: z.number().optional(),
- max_input_tokens: z.number().optional(),
- max_output_tokens: z.number().optional(),
+ // Context window limits from LiteLLM data (token counts)
+ max_tokens: z.number().int().positive().optional(),
+ max_input_tokens: z.number().int().positive().optional(),
+ max_output_tokens: z.number().int().positive().optional(),

132-135: Lint hint appears spurious; confirm typecheck/lint pass

Static-analysis hints flag z.number().optional() as unsafe here, but the same pattern is used above (lines 128–131) without issues. Likely a false positive. Please re-run typecheck/lint to confirm:

  • bun typecheck
  • bun run lint

If it persists, we can switch to const TokenCount = z.number().int().positive(); and reuse it to appease the linter:

+const TokenCount = z.number().int().positive();
 export const modelPricingSchema = z.object({
   input_cost_per_token: z.number().optional(),
   output_cost_per_token: z.number().optional(),
   cache_creation_input_token_cost: z.number().optional(),
   cache_read_input_token_cost: z.number().optional(),
   // Context window limits from LiteLLM data (token counts)
-  max_tokens: z.number().optional(),
-  max_input_tokens: z.number().optional(),
-  max_output_tokens: z.number().optional(),
+  max_tokens: TokenCount.optional(),
+  max_input_tokens: TokenCount.optional(),
+  max_output_tokens: TokenCount.optional(),
 });
src/data-loader.ts (1)

1438-1481: Consider simplifying error messages for clarity

The error catch blocks now say "Skip malformed JSON lines" but the actual issue could be any JSON parsing error, not just malformed structure. Consider keeping the original simpler message.

 } catch {
-  continue; // Skip malformed JSON lines
+  continue; // Skip invalid JSON lines
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between ba87e3c and db0f566.

📒 Files selected for processing (10)
  • .lsmcp/memories/symbol_index_status.md (1 hunks)
  • package.json (1 hunks)
  • src/_consts.ts (0 hunks)
  • src/_types.ts (1 hunks)
  • src/commands/statusline.ts (2 hunks)
  • src/data-loader.ts (4 hunks)
  • src/pricing-fetcher.ts (2 hunks)
  • test/statusline-test-opus4.json (1 hunks)
  • test/statusline-test-sonnet4.json (1 hunks)
  • test/statusline-test-sonnet41.json (1 hunks)
💤 Files with no reviewable changes (1)
  • src/_consts.ts
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.{ts,tsx}: Use tab indentation
Use double quotes for strings
Do not use console.log except where explicitly disabled; use logger.ts instead
File paths must use Node.js path utilities for cross-platform compatibility
Use .ts extensions for local file imports (e.g., import { foo } from './utils.ts')
Prefer @praha/byethrow Result type over traditional try-catch for functional error handling
Use Result.try() to wrap operations that may throw (e.g., JSON parsing)
Use Result.isFailure() and Result.isSuccess() type guards instead of negations
Prefer early return pattern over ternary operators when handling Result
Keep traditional try-catch only for file I/O with complex handling or legacy code that's hard to refactor
For async operations, create a wrapper function with Result.try() and call it
Variables use camelCase (e.g., usageDataSchema, modelBreakdownSchema)
Types use PascalCase (e.g., UsageData, ModelBreakdown)
Constants may use UPPER_SNAKE_CASE (e.g., DEFAULT_CLAUDE_CODE_PATH)
Use in-source testing blocks guarded by if (import.meta.vitest != null); do not create separate test files

Files:

  • src/pricing-fetcher.ts
  • src/_types.ts
  • src/commands/statusline.ts
  • src/data-loader.ts
**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.ts: Only export constants, functions, and types that are used by other modules; keep private values non-exported
Dynamic imports using await import() should only appear within in-source test blocks
Vitest globals (describe, it, expect) are available automatically without imports
Use fs-fixture createFixture() for mock data in tests that simulate Claude data directories
Model names in tests must use current Claude 4 models (Sonnet and Opus) and cover both for test coverage
Model naming must follow claude-{model-type}-{generation}-{date} (e.g., claude-sonnet-4-20250514)
Model names in tests must exactly match LiteLLM pricing database; verify existence before adding new tests
Cost calculations should only target models supported by LiteLLM; unsupported models cannot be calculated

Files:

  • src/pricing-fetcher.ts
  • src/_types.ts
  • src/commands/statusline.ts
  • src/data-loader.ts
src/**/_*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Internal files should use underscore prefix (e.g., _types.ts, _utils.ts, _consts.ts)

Files:

  • src/_types.ts
package.json

📄 CodeRabbit Inference Engine (CLAUDE.md)

Dependencies should be added as devDependencies unless explicitly required otherwise

Files:

  • package.json
🧠 Learnings (3)
📚 Learning: 2025-08-13T12:07:22.065Z
Learnt from: CR
PR: ryoppippi/ccusage#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-13T12:07:22.065Z
Learning: After code changes, always run: bun run format, bun typecheck, bun run test

Applied to files:

  • package.json
📚 Learning: 2025-08-13T12:07:22.065Z
Learnt from: CR
PR: ryoppippi/ccusage#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-13T12:07:22.065Z
Learning: Use indexed symbol tools (search_symbol_from_index, get_document_symbols, get_definitions, find_references) and avoid reading entire files unless necessary

Applied to files:

  • .lsmcp/memories/symbol_index_status.md
📚 Learning: 2025-08-13T12:07:22.065Z
Learnt from: CR
PR: ryoppippi/ccusage#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-13T12:07:22.065Z
Learning: Applies to **/*.ts : Model names in tests must use current Claude 4 models (Sonnet and Opus) and cover both for test coverage

Applied to files:

  • src/commands/statusline.ts
🧬 Code Graph Analysis (3)
test/statusline-test-opus4.json (1)
src/debug.ts (1)
  • fixture (446-475)
src/commands/statusline.ts (1)
src/data-loader.ts (1)
  • calculateContextTokens (1412-1488)
src/data-loader.ts (2)
src/pricing-fetcher.ts (1)
  • PricingFetcher (22-288)
src/logger.ts (1)
  • logger (19-19)
🪛 ESLint
src/pricing-fetcher.ts

[error] 197-197: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 197-197: Unsafe member access .pipe on an error typed value.

(ts/no-unsafe-member-access)


[error] 199-199: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 199-199: Unsafe member access .map on an error typed value.

(ts/no-unsafe-member-access)


[error] 204-204: Unsafe assignment of an any value.

(ts/no-unsafe-assignment)


[error] 204-204: Unsafe member access .max_input_tokens on an any value.

(ts/no-unsafe-member-access)


[error] 204-204: Unsafe member access .max_tokens on an any value.

(ts/no-unsafe-member-access)


[error] 209-209: Unsafe return of a value of type any.

(ts/no-unsafe-return)


[error] 658-658: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 658-658: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 658-658: Unsafe member access .unwrap on an error typed value.

(ts/no-unsafe-member-access)


[error] 666-666: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 666-666: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 666-666: Unsafe member access .unwrap on an error typed value.

(ts/no-unsafe-member-access)


[error] 682-682: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 682-682: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 682-682: Unsafe member access .unwrap on an error typed value.

(ts/no-unsafe-member-access)

src/_types.ts

[error] 133-133: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 133-133: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 133-133: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 133-133: Unsafe member access .number on an error typed value.

(ts/no-unsafe-member-access)


[error] 133-133: Unsafe member access .optional on an error typed value.

(ts/no-unsafe-member-access)


[error] 134-134: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 134-134: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 134-134: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 134-134: Unsafe member access .number on an error typed value.

(ts/no-unsafe-member-access)


[error] 134-134: Unsafe member access .optional on an error typed value.

(ts/no-unsafe-member-access)


[error] 135-135: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 135-135: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 135-135: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 135-135: Unsafe member access .number on an error typed value.

(ts/no-unsafe-member-access)


[error] 135-135: Unsafe member access .optional on an error typed value.

(ts/no-unsafe-member-access)

src/commands/statusline.ts

[error] 167-167: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 167-167: Unsafe member access .pipe on an error typed value.

(ts/no-unsafe-member-access)


[error] 168-168: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 168-168: Unsafe member access .try on an error typed value.

(ts/no-unsafe-member-access)


[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string.

(ts/no-unsafe-argument)


[error] 169-169: Unsafe member access .transcript_path on an error typed value.

(ts/no-unsafe-member-access)


[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string | undefined.

(ts/no-unsafe-argument)


[error] 169-169: Unsafe member access .model on an error typed value.

(ts/no-unsafe-member-access)


[error] 169-169: Unsafe argument of type any assigned to a parameter of type boolean.

(ts/no-unsafe-argument)


[error] 169-169: Unsafe member access .values on an any value.

(ts/no-unsafe-member-access)

src/data-loader.ts

[error] 1457-1457: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 1458-1458: Unexpected any value in conditional. An explicit comparison or type conversion is required.

(ts/strict-boolean-expressions)


[error] 1458-1458: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1458-1458: Unsafe member access .isSuccess on an error typed value.

(ts/no-unsafe-member-access)


[error] 1458-1458: Unsafe member access .value on an error typed value.

(ts/no-unsafe-member-access)


[error] 1459-1459: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 1459-1459: Unsafe member access .value on an error typed value.

(ts/no-unsafe-member-access)


[error] 1461-1461: Unexpected any value in conditional. An explicit comparison or type conversion is required.

(ts/strict-boolean-expressions)


[error] 1461-1461: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1461-1461: Unsafe member access .isSuccess on an error typed value.

(ts/no-unsafe-member-access)


[error] 1463-1463: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1463-1463: Unsafe member access .debug on an error typed value.

(ts/no-unsafe-member-access)


[error] 1467-1467: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1467-1467: Unsafe member access .debug on an error typed value.

(ts/no-unsafe-member-access)


[error] 1467-1467: Unsafe member access .error on an error typed value.

(ts/no-unsafe-member-access)


[error] 1474-1474: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)

🪛 GitHub Actions: CI
src/pricing-fetcher.ts

[error] 656-656: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.


[error] 664-664: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.


[error] 680-680: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.

🪛 LanguageTool
.lsmcp/memories/symbol_index_status.md

[grammar] ~8-~8: There might be a mistake here.
Context: ...oppippi/ghq/github.com/ryoppippi/ccusage Language: TypeScript/JavaScript Pattern ...

(QB_NEW_EN)


[grammar] ~9-~9: There might be a mistake here.
Context: .../ccusage Language: TypeScript/JavaScript Pattern Used: */.{ts,tsx,js,jsx} (aut...

(QB_NEW_EN)


[grammar] ~14-~14: There might be a mistake here.
Context: ...X STATISTICS: - Total files indexed: 34 - Total symbols: 209 - Indexing time: 1.44...

(QB_NEW_EN)


[grammar] ~15-~15: There might be a mistake here.
Context: ...l files indexed: 34 - Total symbols: 209 - Indexing time: 1.44 seconds - Average ti...

(QB_NEW_EN)


[grammar] ~16-~16: There might be a mistake here.
Context: ...mbols: 209 - Indexing time: 1.44 seconds - Average time per file: 41ms KEY SYMBOLS...

(QB_NEW_EN)


[grammar] ~21-~21: There might be a mistake here.
Context: ...ormatRemainingTime, calculateCost, etc.) - Core command handlers in src/commands/ -...

(QB_NEW_EN)


[grammar] ~22-~22: There might be a mistake here.
Context: ...- Core command handlers in src/commands/ - Utility functions in src/_*.ts files -...

(QB_NEW_EN)


[grammar] ~23-~23: There might be a mistake here.
Context: ...- Utility functions in src/_*.ts files - Type definitions and schemas RECENT ADD...

(QB_NEW_EN)


[grammar] ~28-~28: There might be a mistake here.
Context: ...extLimit` method in `PricingFetcher` class - Enhanced `calculateContextTokens` functi...

(QB_NEW_EN)


[grammar] ~29-~29: There might be a mistake here.
Context: ...ens` function with model-specific limits - Updated statusline command to support dy...

(QB_NEW_EN)


[grammar] ~30-~30: There might be a mistake here.
Context: ... to support dynamic context calculations - Added context limit fields to modelPrici...

(QB_NEW_EN)


[grammar] ~31-~31: There might be a mistake here.
Context: ...ns, max_input_tokens, max_output_tokens) - Comprehensive test coverage for new func...

(QB_NEW_EN)


[grammar] ~32-~32: There might be a mistake here.
Context: ...sive test coverage for new functionality - Test fixtures for Claude 4 model variant...

(QB_NEW_EN)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Workers Builds: ccusage-guide
🔇 Additional comments (7)
test/statusline-test-opus4.json (1)

1-13: Approve: fixture consistent & Opus 4.1 present in LiteLLM pricing

Verified locally and upstream:

  • Local fixtures: test/statusline-test-opus4.json and test/statusline-test.json contain "id": "claude-opus-4-1-20250805"; test/test-transcript.jsonl exists.
  • LiteLLM pricing includes the model (bedrock/us.anthropic.claude-opus-4-1-20250805-v1:0) with max_input_tokens=200,000 and max_tokens (max_output_tokens)=32,000 — so getModelContextLimit() should return a non-null value.

No changes required; approving.

test/statusline-test-sonnet4.json (1)

1-13: Solid addition; model ID matches expected naming

This fixture complements the 4.1 variants and aligns with the dynamic context limit changes. No issues spotted.

package.json (2)

56-59: New statusline test scripts are well-scoped and offline-friendly

Scripts cover Sonnet 4, Opus 4.1, and Sonnet 4.1 individually and in aggregate. Using --offline is appropriate for deterministic tests. Looks good.


56-59: Verified: fixtures present & --offline flag correctly wired

All referenced fixtures exist, and the statusline command exposes/passes the offline flag (and model id) down to context/pricing code.

  • Fixtures found: test/statusline-test.json, test/statusline-test-sonnet4.json, test/statusline-test-opus4.json, test/statusline-test-sonnet41.json.
  • package.json scripts (≈ lines 55–59) call the statusline command (model-specific scripts pass --offline).
  • src/commands/statusline.ts — defines the offline arg (default true) and calls:
    • calculateContextTokens(hookData.transcript_path, hookData.model.id, ctx.values.offline) (see occurrences around lines ~33, ~71, ~169).
  • src/data-loader.ts — calculateContextTokens(transcriptPath, modelId?, offline = false) and constructs new PricingFetcher(offline) (see around lines ~1412 and ~1456).

No code changes required. Please proceed with the usual workflow checks (bun run format, bun typecheck, bun run test).

src/commands/statusline.ts (1)

166-169: Context info calculation looks good!

The implementation correctly passes the model ID and offline flag to calculate context tokens with model-specific limits. The error handling is appropriate, logging debug messages when calculation fails.

.lsmcp/memories/symbol_index_status.md (1)

1-57: Documentation looks comprehensive!

The symbol index status document provides excellent documentation of the dynamic context limits feature, including clear descriptions of the new functionality, test coverage details, and testing commands.

src/data-loader.ts (1)

1412-1415: Well-structured API changes!

The updated function signature properly accepts optional modelId and offline parameters with sensible defaults. The return type now includes the contextLimit in the response object, which is excellent for transparency.
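The shape described above can be sketched as follows. Only the parameter list (modelId optional, offline defaulting) and the inputTokens/percentage/contextLimit fields come from the review; the ContextInfo type name and the stubbed body are illustrative assumptions, not the project's actual implementation:

```typescript
// Illustrative return type; field names follow the review discussion.
type ContextInfo = {
	inputTokens: number;
	percentage: number;
	contextLimit: number;
};

async function calculateContextTokens(
	transcriptPath: string,
	modelId?: string,
	offline = false,
): Promise<ContextInfo | null> {
	// The real implementation reads the transcript and queries pricing data;
	// this stub only demonstrates the fallback-limit shape of the result.
	const contextLimit = 200_000; // fallback when model data is unavailable
	const inputTokens = 50_000; // placeholder token count for the sketch
	return {
		inputTokens,
		percentage: Math.round((inputTokens / contextLimit) * 100),
		contextLimit,
	};
}
```

Returning contextLimit alongside the percentage lets callers such as the statusline show both the raw count and the limit it was measured against.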

Comment on lines +1453 to +1469
// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
using fetcher = new PricingFetcher(offline);
const contextLimitResult = await fetcher.getModelContextLimit(modelId);
if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
contextLimit = contextLimitResult.value;
}
else if (Result.isSuccess(contextLimitResult)) {
// Context limit not available for this model in LiteLLM
logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
}
else {
// Error occurred
logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
}
}

⚠️ Potential issue

Fix Result handling in calculateContextTokens

The static analysis has flagged several TypeScript safety issues with the Result handling. The main issue is that getModelContextLimit returns a ResultAsync, not a synchronous Result, so you need to await it before checking its status.

Apply this fix to properly handle the async Result:

 // Get context limit from PricingFetcher
 let contextLimit = 200_000; // Fallback for when modelId is not provided
 if (modelId != null && modelId !== '') {
   using fetcher = new PricingFetcher(offline);
-  const contextLimitResult = await fetcher.getModelContextLimit(modelId);
-  if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
-    contextLimit = contextLimitResult.value;
-  }
-  else if (Result.isSuccess(contextLimitResult)) {
-    // Context limit not available for this model in LiteLLM
-    logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
-  }
-  else {
-    // Error occurred
-    logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
-  }
+  const contextLimitResult = await fetcher.getModelContextLimit(modelId);
+  Result.match(contextLimitResult, {
+    success: (value) => {
+      if (value != null) {
+        contextLimit = value;
+      } else {
+        // Context limit not available for this model in LiteLLM
+        logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
+      }
+    },
+    failure: (error) => {
+      // Error occurred
+      logger.debug(`Failed to get context limit for model ${modelId}: ${error.message}`);
+    }
+  });
 }

Or using the more idiomatic Result.unwrap approach with a default:

 // Get context limit from PricingFetcher
 let contextLimit = 200_000; // Fallback for when modelId is not provided
 if (modelId != null && modelId !== '') {
   using fetcher = new PricingFetcher(offline);
-  const contextLimitResult = await fetcher.getModelContextLimit(modelId);
-  if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
-    contextLimit = contextLimitResult.value;
-  }
-  else if (Result.isSuccess(contextLimitResult)) {
-    // Context limit not available for this model in LiteLLM
-    logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
-  }
-  else {
-    // Error occurred
-    logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
-  }
+  const result = await Result.unwrap(fetcher.getModelContextLimit(modelId), null);
+  if (result != null) {
+    contextLimit = result;
+  } else {
+    logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
+  }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
using fetcher = new PricingFetcher(offline);
const contextLimitResult = await fetcher.getModelContextLimit(modelId);
if (Result.isSuccess(contextLimitResult) && contextLimitResult.value != null) {
contextLimit = contextLimitResult.value;
}
else if (Result.isSuccess(contextLimitResult)) {
// Context limit not available for this model in LiteLLM
logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
}
else {
// Error occurred
logger.debug(`Failed to get context limit for model ${modelId}: ${contextLimitResult.error.message}`);
}
}
// Get context limit from PricingFetcher
let contextLimit = 200_000; // Fallback for when modelId is not provided
if (modelId != null && modelId !== '') {
using fetcher = new PricingFetcher(offline);
const contextLimitResult = await fetcher.getModelContextLimit(modelId);
Result.match(contextLimitResult, {
success: (value) => {
if (value != null) {
contextLimit = value;
} else {
// Context limit not available for this model in LiteLLM
logger.debug(`No context limit data available for model ${modelId} in LiteLLM`);
}
},
failure: (error) => {
// Error occurred
logger.debug(`Failed to get context limit for model ${modelId}: ${error.message}`);
},
});
}
🧰 Tools
🪛 ESLint

[error] 1457-1457: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 1458-1458: Unexpected any value in conditional. An explicit comparison or type conversion is required.

(ts/strict-boolean-expressions)


[error] 1458-1458: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1458-1458: Unsafe member access .isSuccess on an error typed value.

(ts/no-unsafe-member-access)


[error] 1458-1458: Unsafe member access .value on an error typed value.

(ts/no-unsafe-member-access)


[error] 1459-1459: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 1459-1459: Unsafe member access .value on an error typed value.

(ts/no-unsafe-member-access)


[error] 1461-1461: Unexpected any value in conditional. An explicit comparison or type conversion is required.

(ts/strict-boolean-expressions)


[error] 1461-1461: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1461-1461: Unsafe member access .isSuccess on an error typed value.

(ts/no-unsafe-member-access)


[error] 1463-1463: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1463-1463: Unsafe member access .debug on an error typed value.

(ts/no-unsafe-member-access)


[error] 1467-1467: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 1467-1467: Unsafe member access .debug on an error typed value.

(ts/no-unsafe-member-access)


[error] 1467-1467: Unsafe member access .error on an error typed value.

(ts/no-unsafe-member-access)

🤖 Prompt for AI Agents
In src/data-loader.ts around lines 1453 to 1469, the code calls
fetcher.getModelContextLimit which returns a ResultAsync but is treated
synchronously; change to await the async result (e.g., const contextLimitResult
= await fetcher.getModelContextLimit(modelId)) before checking Result.isSuccess,
then handle the success case by reading contextLimitResult.value, the
empty-success case with a debug log, and the failure case by logging
contextLimitResult.error.message; alternatively, use Result.unwrapOr/default to
get a numeric default contextLimit after awaiting the ResultAsync.
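The await-then-branch pattern the prompt describes can be sketched as below. The Result shape and all function names here are hand-rolled stand-ins, not the project's actual Result library API:

```typescript
// Minimal stand-in for a Result type; illustrative only.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

async function getModelContextLimit(
	modelId: string,
): Promise<Result<number | null, Error>> {
	// Simulated LiteLLM lookup: only one model is known in this sketch.
	const table: Record<string, number> = { "test-model": 100_000 };
	return { ok: true, value: table[modelId] ?? null };
}

// Key point: await the async result first, then branch on
// success/failure and on the null "not found" case.
async function resolveContextLimit(modelId?: string): Promise<number> {
	let contextLimit = 200_000; // fallback when the lookup cannot resolve
	if (modelId != null && modelId !== "") {
		const result = await getModelContextLimit(modelId);
		if (result.ok && result.value != null) {
			contextLimit = result.value;
		}
		// On empty success or failure, keep the fallback (the real code
		// logs a debug message in both branches).
	}
	return contextLimit;
}
```

Awaiting before inspecting the result is what removes the "error typed value" lint findings, since the checker then sees a concrete Result rather than a Promise.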

Comment on lines +191 to +212
/**
* Gets context window limit for a specific model from LiteLLM data
* @param modelName - The model name to get context limit for
* @returns The context limit in tokens, or null if not found
*/
async getModelContextLimit(modelName: string): Result.ResultAsync<number | null, Error> {
return Result.pipe(
this.getModelPricing(modelName),
Result.map((pricing) => {
if (pricing == null) {
return null; // Model not found in LiteLLM pricing data
}

const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
if (contextLimit == null) {
return null; // No context limit data available for model
}

return contextLimit;
}),
);
}

⚠️ Potential issue

Fix TypeScript errors in getModelContextLimit method

The static analysis has flagged several TypeScript safety issues in this method. The main issue is incorrect Result handling - you're using Result.pipe and Result.map but the return type indicates a ResultAsync, which means these operations should be async-aware.

Apply this fix to properly handle the Result types:

 async getModelContextLimit(modelName: string): Result.ResultAsync<number | null, Error> {
   return Result.pipe(
-    this.getModelPricing(modelName),
+    await this.getModelPricing(modelName),
     Result.map((pricing) => {
       if (pricing == null) {
         return null; // Model not found in LiteLLM pricing data
       }
 
       const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
       if (contextLimit == null) {
         return null; // No context limit data available for model
       }
 
       return contextLimit;
     }),
   );
 }

Alternatively, since getModelPricing returns a ResultAsync, you could use the async chain directly:

 async getModelContextLimit(modelName: string): Result.ResultAsync<number | null, Error> {
-  return Result.pipe(
-    this.getModelPricing(modelName),
-    Result.map((pricing) => {
-      if (pricing == null) {
-        return null; // Model not found in LiteLLM pricing data
-      }
-
-      const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
-      if (contextLimit == null) {
-        return null; // No context limit data available for model
-      }
-
-      return contextLimit;
-    }),
-  );
+  const pricingResult = await this.getModelPricing(modelName);
+  return Result.map(pricingResult, (pricing) => {
+    if (pricing == null) {
+      return null; // Model not found in LiteLLM pricing data
+    }
+
+    const contextLimit = pricing.max_input_tokens ?? pricing.max_tokens;
+    if (contextLimit == null) {
+      return null; // No context limit data available for model
+    }
+
+    return contextLimit;
+  });
 }
🧰 Tools
🪛 ESLint

[error] 197-197: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 197-197: Unsafe member access .pipe on an error typed value.

(ts/no-unsafe-member-access)


[error] 199-199: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 199-199: Unsafe member access .map on an error typed value.

(ts/no-unsafe-member-access)


[error] 204-204: Unsafe assignment of an any value.

(ts/no-unsafe-assignment)


[error] 204-204: Unsafe member access .max_input_tokens on an any value.

(ts/no-unsafe-member-access)


[error] 204-204: Unsafe member access .max_tokens on an any value.

(ts/no-unsafe-member-access)


[error] 209-209: Unsafe return of a value of type any.

(ts/no-unsafe-return)

🤖 Prompt for AI Agents
In src/pricing-fetcher.ts around lines 191 to 212, the method uses
Result.pipe/Result.map on a ResultAsync which is incorrect; replace the pipe
usage with the async-aware chain by returning the ResultAsync from
getModelPricing(modelName) and calling its .map (or equivalent ResultAsync.map)
to transform the pricing to a number|null, keeping the same null checks for
pricing and contextLimit. Ensure the returned value is the ResultAsync from that
chain (not a plain Result), so types align with the declared
Result.ResultAsync<number | null, Error> return type.

Comment on lines 644 to 685
describe('getModelContextLimit', () => {
it('should return context limit from pricing data when available', async () => {
using fetcher = new PricingFetcher(true); // Use offline mode with cached data

// Mock pricing data with context limits
const mockPricing = new Map([
['test-model', {
input_cost_per_token: 0.00001,
output_cost_per_token: 0.00003,
max_input_tokens: 100_000,
}],
]);
fetcher.cachedPricing = mockPricing;

const contextLimit = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
expect(contextLimit).toBe(100_000);
});

it('should return null when model not found', async () => {
using fetcher = new PricingFetcher(true);
fetcher.cachedPricing = new Map(); // Empty cache

const result = await Result.unwrap(fetcher.getModelContextLimit('unknown-model'));
expect(result).toBeNull();
});

it('should return null when context fields are not available', async () => {
using fetcher = new PricingFetcher(true);

// Mock pricing data without context limits
const mockPricing = new Map([
['test-model', {
input_cost_per_token: 0.00001,
output_cost_per_token: 0.00003,
}],
]);
fetcher.cachedPricing = mockPricing;

const result = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
expect(result).toBeNull();
});
});

⚠️ Potential issue

Test code accesses private member cachedPricing

The test code directly accesses the private cachedPricing member, which violates encapsulation and causes TypeScript errors. Consider exposing this functionality through a proper testing interface or using a test-specific subclass.

Apply this fix to use proper encapsulation:

 describe('getModelContextLimit', () => {
   it('should return context limit from pricing data when available', async () => {
-    using fetcher = new PricingFetcher(true); // Use offline mode with cached data
-
-    // Mock pricing data with context limits
-    const mockPricing = new Map([
-      ['test-model', {
-        input_cost_per_token: 0.00001,
-        output_cost_per_token: 0.00003,
-        max_input_tokens: 100_000,
-      }],
-    ]);
-    fetcher.cachedPricing = mockPricing;
+    // Create a test subclass to expose the private member for testing
+    class TestPricingFetcher extends PricingFetcher {
+      setCachedPricing(pricing: Map<string, ModelPricing>) {
+        this.cachedPricing = pricing;
+      }
+    }
+
+    using fetcher = new TestPricingFetcher(true); // Use offline mode with cached data
+
+    // Mock pricing data with context limits
+    const mockPricing = new Map([
+      ['test-model', {
+        input_cost_per_token: 0.00001,
+        output_cost_per_token: 0.00003,
+        max_input_tokens: 100_000,
+      }],
+    ]);
+    fetcher.setCachedPricing(mockPricing);
 
     const contextLimit = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
     expect(contextLimit).toBe(100_000);
   });
 
   it('should return null when model not found', async () => {
-    using fetcher = new PricingFetcher(true);
-    fetcher.cachedPricing = new Map(); // Empty cache
+    class TestPricingFetcher extends PricingFetcher {
+      setCachedPricing(pricing: Map<string, ModelPricing>) {
+        this.cachedPricing = pricing;
+      }
+    }
+    
+    using fetcher = new TestPricingFetcher(true);
+    fetcher.setCachedPricing(new Map()); // Empty cache
 
     const result = await Result.unwrap(fetcher.getModelContextLimit('unknown-model'));
     expect(result).toBeNull();
   });
 
   it('should return null when context fields are not available', async () => {
-    using fetcher = new PricingFetcher(true);
+    class TestPricingFetcher extends PricingFetcher {
+      setCachedPricing(pricing: Map<string, ModelPricing>) {
+        this.cachedPricing = pricing;
+      }
+    }
+    
+    using fetcher = new TestPricingFetcher(true);
 
     // Mock pricing data without context limits
     const mockPricing = new Map([
       ['test-model', {
         input_cost_per_token: 0.00001,
         output_cost_per_token: 0.00003,
       }],
     ]);
-    fetcher.cachedPricing = mockPricing;
+    fetcher.setCachedPricing(mockPricing);
 
     const result = await Result.unwrap(fetcher.getModelContextLimit('test-model'));
     expect(result).toBeNull();
   });
 });

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 ESLint

[error] 658-658: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 658-658: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 658-658: Unsafe member access .unwrap on an error typed value.

(ts/no-unsafe-member-access)


[error] 666-666: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 666-666: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 666-666: Unsafe member access .unwrap on an error typed value.

(ts/no-unsafe-member-access)


[error] 682-682: Unsafe assignment of an error typed value.

(ts/no-unsafe-assignment)


[error] 682-682: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 682-682: Unsafe member access .unwrap on an error typed value.

(ts/no-unsafe-member-access)

🪛 GitHub Actions: CI

[error] 656-656: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.


[error] 664-664: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.


[error] 680-680: bun typecheck: TypeScript error TS2341: Property 'cachedPricing' is private and only accessible within class 'PricingFetcher'.

🤖 Prompt for AI Agents
In src/pricing-fetcher.ts around lines 644 to 685 the tests directly read/write
the private cachedPricing field; add a small test-friendly API instead:
implement a public method setCachedPricing(pricing: Map<string,
PricingEntryType>) (or a constructor overload) on PricingFetcher that assigns
the passed map to the private cachedPricing field, keep it typed and document it
as test-only (or guard usage with NODE_ENV==='test' if desired), then update the
tests to call fetcher.setCachedPricing(...) instead of touching cachedPricing
directly.
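The test-only seam suggested above might look like the sketch below; class and method names are illustrative. Note that a TypeScript `private` member is not accessible from subclasses either, so the seam has to live on the base class itself (here as a protected setter), rather than a subclass writing to `cachedPricing` directly:

```typescript
// Illustrative ModelPricing subset for the sketch.
type ModelPricing = { max_input_tokens?: number };

class PricingFetcher {
	private cachedPricing: Map<string, ModelPricing> | null = null;

	// Test-only seam: protected so production callers cannot reach it,
	// but test subclasses can.
	protected setCachedPricingForTest(pricing: Map<string, ModelPricing>): void {
		this.cachedPricing = pricing;
	}

	getContextLimit(modelName: string): number | null {
		return this.cachedPricing?.get(modelName)?.max_input_tokens ?? null;
	}
}

// Test-side subclass exposes the protected seam without making
// cachedPricing public on the production class.
class TestPricingFetcher extends PricingFetcher {
	seed(pricing: Map<string, ModelPricing>): void {
		this.setCachedPricingForTest(pricing);
	}
}
```

Keeping the setter protected preserves encapsulation for production code while avoiding the TS2341 errors the CI run reported.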

Comment on lines +1 to +13
{
"session_id": "test-session-sonnet41",
"transcript_path": "test/test-transcript.jsonl",
"cwd": "/Users/test/project",
"model": {
"id": "claude-sonnet-4-1-20250805",
"display_name": "Sonnet 4.1"
},
"workspace": {
"current_dir": "/Users/test/project",
"project_dir": "/Users/test/project"
}
}

💡 Verification agent

🧩 Analysis chain

Fixture looks good; confirm model ID exists in LiteLLM pricing

The structure aligns with the other statusline fixtures and the PR’s dynamic context-limit plumbing. Please verify that claude-sonnet-4-1-20250805 exactly matches a model present in the LiteLLM pricing database so dynamic context limits resolve instead of falling back to 200K.

Run this script to sanity-check local references and fixture presence:

If you want to validate against the latest LiteLLM pricing list, please run this web query:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Checking fixture files exist..."
fd -a 'statusline-test-*.json' test | sort

echo "Searching for model id references..."
rg -n -F "claude-sonnet-4-1-20250805" -C2 || true

echo "Confirm statusline command supports --offline..."
rg -n -C2 -F "--offline" src | sed -n '1,200p' || true

Length of output: 1179


🌐 Web query:

Does LiteLLM’s pricing database include models named "claude-sonnet-4-1-20250805" and "claude-opus-4-1-20250805"? If yes, what are their max_input_tokens / max_tokens context limits?

💡 Result:

Short answer

  • claude-opus-4-1-20250805 — Yes. LiteLLM’s model/cost map includes Opus 4.1 (Bedrock mapping shown in their release notes). Its context limits: 200,000 input-token context window and a 32,000-token maximum output. (docs.litellm.ai, docs.anthropic.com)

  • claude-sonnet-4-1-20250805 — No. Anthropic’s Sonnet 4 is published as claude-sonnet-4-20250514 (not a 4-1 build with the 20250805 tag), and LiteLLM documents and release notes reference the Sonnet 4 (20250514) entry instead. If you mean claude-sonnet-4-20250514, LiteLLM does include Sonnet 4 and its limits are: 200,000 input-token context window and a 64,000-token maximum output. (docs.litellm.ai, docs.anthropic.com)

If you’d like, I can fetch the exact entries/lines from LiteLLM’s model_prices_and_context_window.json and paste the JSON fields for those models.

Citations:


Model ID mismatch — update fixture or add LiteLLM mapping

Quick summary: the fixture file does contain "claude-sonnet-4-1-20250805", but a LiteLLM check shows that exact model id is not present in their pricing DB. LiteLLM lists Sonnet 4 as "claude-sonnet-4-20250514" (context: 200,000 input / 64,000 output). Opus 4.1 is present as "claude-opus-4-1-20250805" (200,000 / 32,000). If the exact id in the fixture is not in LiteLLM’s model map, the dynamic context lookup will not resolve and will fall back to the 200K default.

Files to fix / inspect:

  • test/statusline-test-sonnet41.json — model.id: "claude-sonnet-4-1-20250805" (line ~6)

Recommended fix (pick one):

  • Replace the fixture model id with the exact LiteLLM entry: "claude-sonnet-4-20250514", OR
  • Add a mapping for "claude-sonnet-4-1-20250805" to LiteLLM’s model_prices_and_context_window.json with the correct context limits (suggested: input 200,000 / output 64,000), so the dynamic lookup resolves.

Updated verification script (corrects the literal "--offline" search):

#!/bin/bash
set -euo pipefail

echo "Checking fixture files exist..."
fd -a 'statusline-test-*.json' test | sort

echo "Searching for model id references..."
rg -n -F "claude-sonnet-4-1-20250805" -C2 || true

echo "Confirm statusline command supports --offline (literal search)"
# use `--` so ripgrep treats the next arg as the pattern, not an option
rg -n -C2 -F -- "--offline" src | sed -n '1,200p' || true

If you want, I can fetch the exact LiteLLM JSON entries (model_prices_and_context_window.json) for the Sonnet/Opus models and paste the fields here.

Remove test logic from statusline.ts as it should not contain tests

- Tests belong in data-loader.ts or separate test files

- Commands should focus on execution logic only

- Maintains clean separation of concerns
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (1)
src/commands/statusline.ts (1)

166-191: Resolve ts/no-unsafe-* violations and avoid shadowing; locally narrow types for the calculateContextTokens() call

ESLint is flagging multiple ts/no-unsafe-* violations on Line 169 due to:

  • Accessing hookData fields without a narrowed type in this scope
  • Accessing ctx.values.offline where ctx is untyped in this command’s run() signature

Additionally, the map callback parameter named ctx shadows the outer command context, hurting readability.

Locally narrow the types we need, coerce offline to a boolean, and avoid the shadowed identifier.

Apply this diff:

 		// Calculate context tokens from transcript with model-specific limits
-		const contextInfo = await Result.pipe(
+		// Narrow types locally to satisfy eslint ts/no-unsafe-* without impacting outer scope
+		type MinimalHookData = { transcript_path: string; model: { id?: string } };
+		const { transcript_path, model } = hookData as MinimalHookData;
+		const offline = Boolean((ctx as { values?: { offline?: unknown } }).values?.offline);
+
+		const contextInfo = await Result.pipe(
 			Result.try({
-				try: calculateContextTokens(hookData.transcript_path, hookData.model.id, ctx.values.offline),
+				try: calculateContextTokens(transcript_path, model.id, offline),
 				catch: error => error,
 			}),
 			Result.inspectError(error => logger.debug(`Failed to calculate context tokens: ${error instanceof Error ? error.message : String(error)}`)),
-			Result.map((ctx) => {
-				if (ctx == null) {
+			Result.map((info) => {
+				if (info == null) {
 					return undefined;
 				}
 				// Format context percentage with color coding using configurable thresholds
 				const thresholds = getContextUsageThresholds();
-				const color = ctx.percentage < thresholds.LOW
+				const color = info.percentage < thresholds.LOW
 					? pc.green
-					: ctx.percentage < thresholds.MEDIUM
+					: info.percentage < thresholds.MEDIUM
 						? pc.yellow
 						: pc.red;
-				const coloredPercentage = color(`${ctx.percentage}%`);
+				const coloredPercentage = color(`${info.percentage}%`);
 
 				// Format token count with thousand separators
-				const tokenDisplay = ctx.inputTokens.toLocaleString();
+				const tokenDisplay = info.inputTokens.toLocaleString();
 				return `${tokenDisplay} (${coloredPercentage})`;
 			}),
 			Result.unwrap(undefined),
 		);
🧹 Nitpick comments (1)
src/commands/statusline.ts (1)

48-55: Wrap JSON.parse in Result.try() to match guidelines and avoid hard exits on parse errors

JSON.parse can throw; current code will crash before zod validation can produce a cleaner message. Per the repo’s guidelines, prefer Result.try() for operations that may throw.

Example adjustment (outside the changed hunk; illustrative snippet):

// Read input from stdin
const stdin = await getStdin();
if (stdin.length === 0) {
	log("❌ No input provided");
	process.exit(1);
}

const hookData = await Result.pipe(
	Result.try({
		try: () => JSON.parse(stdin.trim()),
		catch: err => err,
	}),
	Result.map(json => {
		const parsed = statuslineHookJsonSchema.safeParse(json);
		return parsed.success ? parsed.data : null;
	}),
	Result.inspectError(err => log("❌ Failed to parse input:", err instanceof Error ? err.message : String(err))),
	Result.unwrap<unknown | null>(null),
);

if (hookData == null) {
	log("❌ Invalid input format");
	process.exit(1);
}

If you want, I can turn this into a precise diff for this file.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between db0f566 and 3569b4a.

📒 Files selected for processing (1)
  • src/commands/statusline.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.{ts,tsx}: Use tab indentation
Use double quotes for strings
Do not use console.log except where explicitly disabled; use logger.ts instead
File paths must use Node.js path utilities for cross-platform compatibility
Use .ts extensions for local file imports (e.g., import { foo } from './utils.ts')
Prefer @praha/byethrow Result type over traditional try-catch for functional error handling
Use Result.try() to wrap operations that may throw (e.g., JSON parsing)
Use Result.isFailure() and Result.isSuccess() type guards instead of negations
Prefer early return pattern over ternary operators when handling Result
Keep traditional try-catch only for file I/O with complex handling or legacy code that's hard to refactor
For async operations, create a wrapper function with Result.try() and call it
Variables use camelCase (e.g., usageDataSchema, modelBreakdownSchema)
Types use PascalCase (e.g., UsageData, ModelBreakdown)
Constants may use UPPER_SNAKE_CASE (e.g., DEFAULT_CLAUDE_CODE_PATH)
Use in-source testing blocks guarded by if (import.meta.vitest != null); do not create separate test files

Files:

  • src/commands/statusline.ts
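The Result guidelines above (wrap throwing operations with Result.try(), prefer type guards, prefer early returns) can be sketched without the library; the Result type and helpers below are a hypothetical stand-in mirroring the @praha/byethrow shape for illustration, not the library's actual API:

```typescript
// Illustrative stand-in for a byethrow-style Result; not the real library API.
type Result<T, E> =
	| { readonly type: "Success"; readonly value: T }
	| { readonly type: "Failure"; readonly error: E };

// Wrap an operation that may throw (e.g. JSON parsing) instead of try-catch.
function resultTry<T, E>(run: () => T, onCatch: (err: unknown) => E): Result<T, E> {
	try {
		return { type: "Success", value: run() };
	}
	catch (err) {
		return { type: "Failure", error: onCatch(err) };
	}
}

// Type guard, preferred over negating an isFailure check.
function isSuccess<T, E>(result: Result<T, E>): result is Extract<Result<T, E>, { type: "Success" }> {
	return result.type === "Success";
}

const parsed = resultTry(
	() => JSON.parse("{\"model\":{\"id\":\"claude-sonnet-4-20250514\"}}") as { model: { id: string } },
	err => err,
);

// Early return pattern when handling a Result, per the guideline above.
if (isSuccess(parsed)) {
	console.log(parsed.value.model.id);
}
```

Note the tab indentation and double quotes, matching the style rules listed above.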
**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

**/*.ts: Only export constants, functions, and types that are used by other modules; keep private values non-exported
Dynamic imports using await import() should only appear within in-source test blocks
Vitest globals (describe, it, expect) are available automatically without imports
Use fs-fixture createFixture() for mock data in tests that simulate Claude data directories
Model names in tests must use current Claude 4 models (Sonnet and Opus) and cover both for test coverage
Model naming must follow claude-{model-type}-{generation}-{date} (e.g., claude-sonnet-4-20250514)
Model names in tests must exactly match LiteLLM pricing database; verify existence before adding new tests
Cost calculations should only target models supported by LiteLLM; unsupported models cannot be calculated

Files:

  • src/commands/statusline.ts
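The naming convention above can be checked mechanically; this regex is an illustrative sketch of the claude-{model-type}-{generation}-{date} shape, not code from the repo, and it does not replace verifying a name against the LiteLLM pricing database:

```typescript
// Matches claude-{model-type}-{generation}-{date}, where the generation may
// carry a point release (e.g. 4-1) and the date is eight digits (YYYYMMDD).
const MODEL_NAME_PATTERN = /^claude-(?:sonnet|opus|haiku)-\d+(?:-\d+)?-\d{8}$/;

function followsNamingConvention(modelName: string): boolean {
	return MODEL_NAME_PATTERN.test(modelName);
}

console.log(followsNamingConvention("claude-sonnet-4-20250514")); // true
console.log(followsNamingConvention("sonnet-4")); // false
```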
🧬 Code Graph Analysis (1)
src/commands/statusline.ts (1)
src/data-loader.ts (6)
  • calculateContextTokens (1412-1488)
  • calculateContextTokens (1414-1472)
  • it (4718-4761)
  • fixture (4739-4749)
  • fixture (4724-4737)
  • fixture (4751-4760)
🪛 ESLint
src/commands/statusline.ts

[error] 167-167: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 167-167: Unsafe member access .pipe on an error typed value.

(ts/no-unsafe-member-access)


[error] 168-168: Unsafe call of a(n) error type typed value.

(ts/no-unsafe-call)


[error] 168-168: Unsafe member access .try on an error typed value.

(ts/no-unsafe-member-access)


[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string.

(ts/no-unsafe-argument)


[error] 169-169: Unsafe member access .transcript_path on an error typed value.

(ts/no-unsafe-member-access)


[error] 169-169: Unsafe argument of type error typed assigned to a parameter of type string | undefined.

(ts/no-unsafe-argument)


[error] 169-169: Unsafe member access .model on an error typed value.

(ts/no-unsafe-member-access)


[error] 169-169: Unsafe argument of type any assigned to a parameter of type boolean.

(ts/no-unsafe-argument)


[error] 169-169: Unsafe member access .values on an any value.

(ts/no-unsafe-member-access)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Workers Builds: ccusage-guide

Replace @typescript-eslint/ban-ts-comment with ts/ban-ts-comment

- Use shortened ESLint rule notation for consistency

- Maintains same functionality with cleaner comments

- Follows project style guidelines
@ryoppippi ryoppippi changed the title feat: implement dynamic context limits for Claude models feat: implement dynamic context limits with enhanced code quality Aug 15, 2025
@ryoppippi ryoppippi changed the title feat: implement dynamic context limits with enhanced code quality feat(statusline): implement dynamic context limits with enhanced code quality Aug 15, 2025
@ryoppippi ryoppippi changed the title feat(statusline): implement dynamic context limits with enhanced code quality feat(statusline): fetchning context limit per model dynamically from litellm Aug 15, 2025
@ryoppippi ryoppippi changed the title feat(statusline): fetchning context limit per model dynamically from litellm feat(statusline): fetching context limit per model dynamically from litellm Aug 15, 2025
@ryoppippi ryoppippi merged commit 41d1e3c into main Aug 15, 2025
19 checks passed
@ryoppippi ryoppippi deleted the feature/dynamic-context-limits branch August 15, 2025 11:39
