feat(integrations): support Kimi Code CLI as non-interactive LLM backend #1139
muddlebee merged 14 commits into Tracer-Cloud:main
Conversation
Greptile Summary

This PR adds Kimi Code CLI as a new non-interactive LLM backend.

Confidence Score: 5/5 — Safe to merge. All prior P0/P1 findings from earlier review rounds have been addressed; remaining comments are P2 style nits only. No new P0 or P1 issues found. The auth check uses KIMI_API_KEY correctly, parse() raises RuntimeError on empty output (consistent with runner.py's error contract), the hollow test was replaced with one that has real assertions, and the dead tomli fallback was removed. Only minor style issues remain.

Important Files Changed: app/integrations/llm_cli/kimi.py — misleading `_ = stderr` / `_ = returncode` suppressions in parse()
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant CLIBackedLLMClient
    participant KimiAdapter
    participant _check_kimi_auth
    participant kimi_CLI
    Caller->>CLIBackedLLMClient: invoke(prompt)
    CLIBackedLLMClient->>KimiAdapter: detect()
    KimiAdapter->>kimi_CLI: kimi --version
    kimi_CLI-->>KimiAdapter: version string
    KimiAdapter->>_check_kimi_auth: check KIMI_API_KEY / config.toml
    _check_kimi_auth-->>KimiAdapter: (logged_in, detail)
    KimiAdapter-->>CLIBackedLLMClient: CLIProbe
    alt not installed or not logged in
        CLIBackedLLMClient-->>Caller: raise RuntimeError
    end
    CLIBackedLLMClient->>KimiAdapter: build(prompt, model, workspace)
    KimiAdapter-->>CLIBackedLLMClient: CLIInvocation(argv, stdin)
    CLIBackedLLMClient->>kimi_CLI: subprocess.run(argv, stdin=prompt, env=KIMI_* filtered)
    kimi_CLI-->>CLIBackedLLMClient: stdout / returncode
    alt returncode != 0
        CLIBackedLLMClient->>KimiAdapter: explain_failure()
        CLIBackedLLMClient-->>Caller: raise RuntimeError
    end
    CLIBackedLLMClient->>KimiAdapter: parse(stdout)
    alt stdout empty
        KimiAdapter-->>Caller: raise RuntimeError
    end
    KimiAdapter-->>CLIBackedLLMClient: content string
    CLIBackedLLMClient-->>Caller: LLMResponse(content)
```
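In Python terms, the diagram above corresponds roughly to a client flow like this (a minimal sketch; the adapter protocol details and error messages are assumptions based on the diagram, not the repo's exact code):

```python
import subprocess


class CLIBackedLLMClient:
    """Sketch of the invoke() flow from the sequence diagram above."""

    def __init__(self, adapter):
        self.adapter = adapter

    def invoke(self, prompt: str) -> str:
        # detect() runs `kimi --version` and the auth check, returning a probe.
        probe = self.adapter.detect()
        if not probe.installed or not probe.logged_in:
            raise RuntimeError(f"Kimi CLI unavailable: {probe.detail}")
        # build() produces the argv plus the prompt to feed on stdin.
        inv = self.adapter.build(prompt, model=None, workspace=None)
        result = subprocess.run(
            inv.argv, input=inv.stdin, capture_output=True, text=True
        )
        if result.returncode != 0:
            raise RuntimeError(self.adapter.explain_failure(result))
        # parse() raises RuntimeError itself on empty stdout.
        return self.adapter.parse(result.stdout)
```

The adapter is duck-typed: anything with detect/build/parse/explain_failure plugs in, which is how Codex, Cursor, and Kimi share this client.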
Reviews (5): Last reviewed commit: "fix(llm-cli): final restoration of Claud..."

working on greptile suggestion

@greptile review
Force-pushed 0402c80 to c8f47e6
Force-pushed c8f47e6 to aa7ae9d

@greptile full-review

Force-pushed aa7ae9d to b4cbe19

@greptile review
@muddlebee @yashksaini-coder PTAL!
yashksaini-coder left a comment
Reviewed the full diff, ran quality checks locally, and verified every Greptile-flagged P1/P2 is resolved.
Local results
- pytest tests/integrations/llm_cli/ — 28/28 passed (Kimi + Codex)
- ruff check — clean
- mypy — clean
Verified P1 fixes
- Auth correctly checks KIMI_API_KEY env first, then ~/.kimi/config.toml via tomllib — no longer testing against OPENAI_API_KEY
- test_detect_missing_config_not_logged_in has explicit assertions (probe.installed is True, probe.logged_in is False)
- tomllib stdlib only — correct since the project requires Python ≥ 3.11
- parse() raises RuntimeError on empty stdout — matches what runner.py catches
Design looks solid
KimiAdapter follows the same duck-typed LLMCLIAdapter protocol as CodexAdapter — no inheritance needed since LLMCLIAdapter is a Protocol. The lazy factory pattern in registry.py avoids circular imports. _SAFE_SUBPROCESS_ENV_PREFIXES extended with "KIMI_" is the correct place for env allowlisting.
Two minor observations (non-blocking)
- _fallback_kimi_paths appends ~/.local/bin entries that _default_cli_fallback_paths("kimi") already emits on Linux. Harmless since resolve_cli_binary deduplicates internally, but the extra loop is noise.
- No test covers the min_version enforcement branch (where the version string is present but below 1.40.0). Worth a follow-up for completeness.
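For the missing min_version test, the comparison that branch guards is just a tuple compare over dotted version strings (a standalone sketch, not the adapter's actual helper):

```python
def version_below_minimum(version: str, minimum: str = "1.40.0") -> bool:
    """True if a dotted version string is below the required minimum."""
    def parse(v: str) -> tuple[int, ...]:
        # "1.39.9" -> (1, 39, 9); tuples compare element-wise.
        return tuple(int(part) for part in v.split("."))
    return parse(version) < parse(minimum)
```

A follow-up test would assert that "1.39.9" trips the enforcement path while "1.40.0" and above pass.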
Neither is a blocker. The adapter is correct, tests are meaningful, and it integrates cleanly with the existing LLM CLI plumbing.
Approved.
@gitsofaryan

Also check the other acceptance criteria; several gaps remain.

Missing entry for env vars in .env.example — refer to app/integrations/llm_cli/AGENTS.md for the specs.
Force-pushed fccf525 to fbcac3a
Force-pushed fbcac3a to 38f6c95

@greptile-apps review

Also @gitsofaryan, an e2e demo please.

Please ignore my previous reviews; they were outdated.
return m.group(1) if m else None

def _check_kimi_auth() -> tuple[bool | None, str]:
Probe Kimi subscription first with kimi login status (mirror codex.py login status). Keep KIMI_API_KEY and config.toml parsing as fallback when that command is absent or the result is inconclusive.
Implementation sketch:
- Run subprocess.run([binary_path, "login", "status"], capture_output=True, text=True, timeout=_PROBE_TIMEOUT_SEC) from _probe_binary after --version succeeds, so you always have the resolved executable.
- Add _classify_kimi_login_status(returncode, stdout, stderr) like codex's _classify_codex_auth: merge stdout+stderr, check negative phrases before positive ones ("not logged in" before "logged in"), and map timeouts/OS errors to logged_in=None.
- If the subprocess fails, or stderr suggests an unknown subcommand or a generic usage error, treat the CLI probe as unavailable and fall through to the trimmed KIMI_API_KEY, then config.toml providers.
- If login status reports not authenticated but KIMI_API_KEY or config.toml still has a key, return logged_in=True with a detail naming that source, so API-key-only setups stay green while OAuth subscribers get a CLI-first signal when status works.
- Extend tests: mock the subprocess ordering — version call, then login status, then fallback paths.
- Add KimiAdapter implementing the LLMCLIAdapter protocol
- Implement three-tier auth probing: kimi login status -> KIMI_API_KEY -> config.toml
- Add binary detection with version checking (v1.40.0+)
- Add comprehensive test coverage (9/9 passing)
- Update registry, config, and wizard integration
- Add environment variable forwarding for KIMI_* prefixes
…pter and CLIBackedLLMClient Co-authored-by: Copilot <[email protected]>
@muddlebee Hey, facing a payment issue with the Kimi API: CLI login works but API calls fail with exit code 75 (quota/billing error). Can you help check whether the account billing is sorted? Thanks.

Recording.2026-05-04.165418.mp4

@muddlebee please suggest fixes if any!
Resolved conflict in app/integrations/llm_cli/runner.py. PR Tracer-Cloud#1295 extracted the subprocess env allowlist into a new app/integrations/llm_cli/subprocess_env.py module. This branch had modified the inline list in runner.py to add the KIMI_ prefix.

Resolution:
- runner.py: take origin/main's version (imports build_cli_subprocess_env from subprocess_env.py, keeps the _build_subprocess_env back-compat alias).
- subprocess_env.py: add "KIMI_" to _SAFE_SUBPROCESS_ENV_PREFIXES so Kimi env vars (KIMI_API_KEY, KIMI_BASE_URL, etc.) are forwarded to the subprocess.

This preserves both PR Tracer-Cloud#1295's refactor and PR Tracer-Cloud#1139's Kimi support.
@gitsofaryan unfortunately the team can't provide a Kimi CLI subscription or auth, and this error ideally shouldn't occur.
Hey @gitsofaryan, took a look at the merge conflict locally. There's just one conflict, in app/integrations/llm_cli/runner.py. The fix is two steps. Run these from your branch:

```shell
git fetch origin
git merge origin/main
# Conflict in runner.py — take main's version (the new shape uses subprocess_env.py)
git checkout --theirs app/integrations/llm_cli/runner.py
# Add KIMI_ to the prefix list in the new module
sed -i 's/("LC_", "CODEX_", "CLAUDE_")/("LC_", "CODEX_", "CLAUDE_", "KIMI_")/' app/integrations/llm_cli/subprocess_env.py
# Stage and finish the merge
git add app/integrations/llm_cli/runner.py app/integrations/llm_cli/subprocess_env.py
git commit
git push
```

I verified this locally. Separately, on the Kimi API quota error (exit code 75) — that's an account-side billing thing, not something the merge fix addresses. Worth checking your Moonshot AI billing dashboard or rotating to an account with credit. Good luck!
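For context, the prefix allowlist that the sed one-liner extends works roughly like this (a sketch of the idea; subprocess_env.py's real base set of always-forwarded variables is an assumption):

```python
import os

# Vendor prefixes whose env vars are safe to forward to CLI subprocesses.
_SAFE_SUBPROCESS_ENV_PREFIXES = ("LC_", "CODEX_", "CLAUDE_", "KIMI_")
# Illustrative base set; the real module's list may differ.
_ALWAYS_FORWARDED = ("PATH", "HOME", "LANG")


def build_cli_subprocess_env() -> dict[str, str]:
    """Forward only allowlisted variables to the CLI subprocess."""
    env: dict[str, str] = {}
    for name, value in os.environ.items():
        # str.startswith accepts a tuple, so one check covers all prefixes.
        if name in _ALWAYS_FORWARDED or name.startswith(_SAFE_SUBPROCESS_ENV_PREFIXES):
            env[name] = value
    return env
```

Allowlisting by prefix is why adding "KIMI_" once forwards KIMI_API_KEY, KIMI_BASE_URL, and any future KIMI_* variables without leaking the rest of the parent environment.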
@gitsofaryan, I ended up pushing the resolution to your branch directly. You can ignore the previous comment with the manual steps. To sync your local clone, just:

```shell
git fetch origin
git reset --hard origin/feat/kimi-cli-1112  # if you have no local changes ahead
# or
git pull --rebase  # if you do have local work to keep
```

Same fix as I described before: took main's version of runner.py and added "KIMI_" in subprocess_env.py.
Main moved while the previous merge was in flight: PR Tracer-Cloud#1149 added Cursor Agent CLI support touching the same prefix tuple, env-var docs, and provider allowlist as this branch.

Resolutions (all "combine both" merges):
- subprocess_env.py: prefix tuple now includes both CURSOR_ and KIMI_
- app/config.py: provider early-return now includes both cursor and kimi
- .env.example: keep both the Cursor and Kimi env-var blocks
Heads up — main moved between my first push and now (PR #1149 landing the Cursor Agent CLI added the same shape of changes you did, in the same files). Pushed a second merge commit 4225c28f to clear the new conflicts. The PR is now MERGEABLE (the state was UNSTABLE last I checked because of CI status, not merge conflicts — worth a glance at any failing checks before you merge). Resolutions in this second merge were all "combine both additions".

If you've made any local changes since this morning, re-sync as described above.
@muddlebee fixed (UTF encoding issue on Windows). I'm unable to generate a Kimi API key — not able to find free/dev API usage. @yashksaini-coder thanks, also addressed the changes.
@gitsofaryan that's not an issue, given the e2e test cases are covered. I'm taking this up; will review and add necessary changes if required :)

Assigning to myself.
Resolve conflicts by keeping Kimi CLI integration alongside OpenCode CLI: wizard providers, LLM_PROVIDER literals/registry, subprocess env prefixes, and merged .env.example provider comment.
- Run config/env fallback whenever login status is not confidently True (including timeouts and spawn errors).
- Update AGENTS.md: Kimi/OpenCode rows, Kimi probe notes, KIMI_ prefix.
🎊 Achievement unlocked: PR Merged. @gitsofaryan passed code review, survived CI, and shipped. Respect. 🤝 👋 Join us on Discord - OpenSRE: hang out, contribute, or hunt for features and issues. Everyone's welcome.

@muddlebee Thanks man.




PR Description:

Fixes #1112

Describe the changes you have made in this PR -

Added KimiAdapter to support using the Kimi Code CLI (kimi -p) as a subprocess-backed LLM provider.
- app/integrations/llm_cli/kimi.py implementing LLMCLIAdapter. It automatically parses auth state by reading ~/.kimi/config.toml or the KIMI_API_KEY env var.
- Registered in app/integrations/llm_cli/registry.py with LLM_PROVIDER="kimi" and model_env_key="KIMI_MODEL".
- Extended _SAFE_SUBPROCESS_ENV_PREFIXES in runner.py with "KIMI_" to properly forward vendor env vars.
- Added KIMI_MODELS in app/cli/wizard/config.py for onboarding.
- Tests in tests/integrations/llm_cli/test_kimi_adapter.py.

Demo/Screenshot for feature changes and bug fixes -
Code Understanding and AI Usage

Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?

If you used AI assistance, explain exactly how you used it:

Used DeepMind Antigravity to quickly spike the Kimi Code CLI (kimi) behavior, identify headless execution options (kimi --print --input-format text --output-format text --final-message-only --yolo), implement the adapter according to the internal AGENTS.md contract, and verify behavior with tests.

Explain your implementation approach:

The KimiAdapter inherits the subprocess environment from runner.py. The detect() mechanism evaluates the Kimi CLI version via kimi --version and checks for valid offline authentication by parsing ~/.kimi/config.toml via tomllib, ensuring we don't consume billable LLM tokens during probes. build() targets the documented headless integration (--print, --input-format text, --output-format text, --final-message-only, --yolo) using standard inputs.