feat: add LongCat-Flash-Thinking-FP8 models to Chutes AI provider #8426
Conversation
Self-review engaged: grading my own code like a mirror that files bug reports.
42ae920 to f55e0b3 (force-push)
Code Review Summary

I've reviewed the changes and identified issues that need to be addressed:

Issues Found
```typescript
expect(model.info).toEqual(
	expect.objectContaining({
		maxTokens: 32768,
		contextWindow: 200000,
```
Test will fail: contextWindow value doesn't match the actual model definition. The test expects 200000 but the model configuration in packages/types/src/providers/chutes.ts defines it as 202752. Update this to match the actual value.
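A minimal sketch of the corrected expectation, written as plain TypeScript rather than the actual vitest spec; the shape of the `model` object here is an assumption for illustration, not the PR's verbatim test code:

```typescript
// Hypothetical stand-in for the model entry under test; the real definition
// lives in packages/types/src/providers/chutes.ts.
const model = {
	id: "zai-org/GLM-4.6-FP8",
	info: {
		maxTokens: 32768,
		contextWindow: 202752, // must match the provider definition, not 200000
	},
};

// Plain-TypeScript equivalent of the vitest assertion:
// expect(model.info).toEqual(
//   expect.objectContaining({ maxTokens: 32768, contextWindow: 202752 }),
// )
console.assert(model.info.contextWindow === 202752, "contextWindow mismatch");
console.assert(model.info.maxTokens === 32768, "maxTokens mismatch");
```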
- Updated GLM-4.6-FP8 test to match resolved merge configuration (contextWindow: 202752, detailed description)
- Added missing test for GLM-4.6-turbo model with correct configuration
- All 25 tests now pass
No issues found.
Summary
This PR attempts to address Issue #8425 by adding support for two new models to the Chutes AI provider: `zai-org/GLM-4.6-FP8` and `meituan-longcat/LongCat-Flash-Thinking-FP8`.
Changes Made
Model Definitions
- Added both new model IDs to the `ChutesModelId` type union

Test Coverage

- Added tests for both models in `chutes.spec.ts`
Testing
- Tests pass (`npx vitest run api/providers/__tests__/chutes.spec.ts`)
- Type check passes (`pnpm check-types`)
- Lint passes (`pnpm lint`)

Related Issue
Fixes #8425
Feedback and guidance are welcome!
Important
Add `zai-org/GLM-4.6-FP8` and `meituan-longcat/LongCat-Flash-Thinking-FP8` models to the Chutes AI provider with configurations and tests.

- Added `zai-org/GLM-4.6-FP8` and `meituan-longcat/LongCat-Flash-Thinking-FP8` to `ChutesModelId` in `chutes.ts`.
- Added model configurations to `chutesModels` with context windows and descriptions.
- Added tests in `chutes.spec.ts` for `zai-org/GLM-4.6-FP8` and `meituan-longcat/LongCat-Flash-Thinking-FP8`.

This description was created by
for c404987. You can customize this summary. It will automatically update as commits are pushed.
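The changes described above amount to something like the following sketch. The `ChutesModelInfo` interface name, the LongCat values, and the description strings are assumptions for illustration, not the PR's verbatim code; only the GLM-4.6-FP8 `contextWindow` of 202752 is confirmed in the review comments.

```typescript
// Sketch only: the real definitions live in packages/types/src/providers/chutes.ts.
type ChutesModelId =
	| "zai-org/GLM-4.6-FP8"
	| "meituan-longcat/LongCat-Flash-Thinking-FP8";

// Hypothetical shape for a model entry; the actual interface may differ.
interface ChutesModelInfo {
	maxTokens: number;
	contextWindow: number;
	description: string;
}

const chutesModels: Record<ChutesModelId, ChutesModelInfo> = {
	"zai-org/GLM-4.6-FP8": {
		maxTokens: 32768,
		contextWindow: 202752, // value confirmed in the review comments
		description: "GLM-4.6 (FP8) served via Chutes AI",
	},
	"meituan-longcat/LongCat-Flash-Thinking-FP8": {
		maxTokens: 32768, // assumed; not stated in the PR text
		contextWindow: 128000, // assumed; not stated in the PR text
		description: "LongCat-Flash-Thinking (FP8) served via Chutes AI",
	},
};
```

Because `chutesModels` is typed as `Record<ChutesModelId, ChutesModelInfo>`, forgetting to add a configuration for a newly added union member is a compile-time error, which is the design motivation for pairing the type-union change with the record change.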