feat(integrations): support Kimi Code CLI as non-interactive LLM backend#1139

Merged
muddlebee merged 14 commits into Tracer-Cloud:main from gitsofaryan:feat/kimi-cli-1112
May 5, 2026

Conversation

@gitsofaryan
Contributor

PR Description:
Fixes #1112

Describe the changes you have made in this PR -

Added KimiAdapter to support using the Kimi Code CLI (kimi -p) as a subprocess-backed LLM provider.

  • Created app/integrations/llm_cli/kimi.py implementing LLMCLIAdapter. It detects auth state by checking the KIMI_API_KEY env var or reading ~/.kimi/config.toml.
  • Registered the adapter in app/integrations/llm_cli/registry.py with LLM_PROVIDER="kimi" and model_env_key="KIMI_MODEL".
  • Extended the _SAFE_SUBPROCESS_ENV_PREFIXES in runner.py with "KIMI_" to properly forward vendor env vars.
  • Registered the provider and configured KIMI_MODELS in app/cli/wizard/config.py for onboarding.
  • Wrote comprehensive unit tests in tests/integrations/llm_cli/test_kimi_adapter.py.

Demo/Screenshot for feature changes and bug fixes -

pytest tests/integrations/llm_cli/test_kimi_adapter.py
============================= test session starts =============================
...
tests/integrations/llm_cli/test_kimi_adapter.py::test_detect_path_binary_logged_in_env PASSED [ 14%]
tests/integrations/llm_cli/test_kimi_adapter.py::test_detect_not_logged_in PASSED [ 28%]
tests/integrations/llm_cli/test_kimi_adapter.py::test_detect_missing_config_not_logged_in PASSED [ 42%]
tests/integrations/llm_cli/test_kimi_adapter.py::test_build_adds_model_flag_and_yolo PASSED [ 57%]
tests/integrations/llm_cli/test_kimi_adapter.py::test_kimi_cli_registry_entry PASSED [ 71%]
tests/integrations/llm_cli/test_cli_backed_client_invoke_forwards_kimi_env PASSED [ 85%]
tests/integrations/llm_cli/test_parse_and_explain_failure PASSED [100%]

============================== 7 passed in 0.23s ==============================

Code Understanding and AI Usage

Did you use AI assistance (ChatGPT, Claude, Copilot, etc.) to write any part of this code?

  • [ ] No, I wrote all the code myself
  • [x] Yes, I used AI assistance

If you used AI assistance, explain exactly how you used it:
Used DeepMind Antigravity to quickly spike the Kimi Code CLI (kimi) behavior, identify headless execution options (kimi --print --input-format text --output-format text --final-message-only --yolo), implement the adapter according to the internal AGENTS.md contract, and verify behavior with tests.

Explain your implementation approach:
The KimiAdapter inherits the subprocess environment from runner.py. detect() checks the Kimi CLI version via kimi --version and verifies offline authentication by parsing ~/.kimi/config.toml with tomllib, so probes don't consume billable LLM tokens. build() targets the documented headless invocation (--print, --input-format text, --output-format text, --final-message-only, --yolo), passing the prompt on standard input.

Comment thread tests/integrations/llm_cli/test_kimi_adapter.py Fixed
Comment thread tests/integrations/llm_cli/test_kimi_adapter.py Fixed
@greptile-apps
Contributor

greptile-apps Bot commented Apr 30, 2026

Greptile Summary

This PR adds Kimi Code CLI as a new non-interactive LLM backend by introducing KimiAdapter, registering it in the provider registry and wizard config, forwarding KIMI_* env vars through the subprocess sanitizer, and shipping 7 unit tests. All previous review concerns (false auth on OPENAI_API_KEY, hollow no-assertion test, dead tomli fallback, silent empty-parse, and ValueError vs RuntimeError contract mismatch) appear to be fully addressed in this revision.

Confidence Score: 5/5

Safe to merge — all prior P0/P1 findings from earlier review rounds have been addressed; remaining comments are P2 style nits only.

No new P0 or P1 issues found. Auth check uses KIMI_API_KEY correctly, parse() raises RuntimeError on empty output (consistent with runner.py's error contract), the hollow test was replaced with one that has real assertions, and the dead tomli fallback was removed. Only minor style issues remain.

app/integrations/llm_cli/kimi.py — misleading _ = stderr / _ = returncode suppressions in parse()

Important Files Changed

Filename Overview
app/integrations/llm_cli/kimi.py New KimiAdapter implementing LLMCLIAdapter contract; auth check correctly uses KIMI_API_KEY (not OPENAI_API_KEY); parse() now raises RuntimeError on empty output; minor: _ = stderr/returncode suppressions are misleading since both are used in explain_failure()
app/integrations/llm_cli/runner.py Adds "KIMI_" to SAFE_SUBPROCESS_ENV_PREFIXES so KIMI* env vars are forwarded to the subprocess; minor re-ordering of USER/LOGNAME in the key set (functionally identical frozenset)
app/integrations/llm_cli/registry.py Registers "kimi" in CLI_PROVIDER_REGISTRY with lazy-imported KimiAdapter factory and KIMI_MODEL env key; straightforward and consistent with existing codex/claude-code entries
app/config.py Adds "kimi" to LLMProvider Literal and valid_providers list, and skips API-key validation for kimi; inline comment drops "claude-code" from the explanation but logic is correct
tests/integrations/llm_cli/test_kimi_adapter.py Seven tests covering detect/build/parse/registry/env-forwarding; previously hollow test_detect_not_logged_in replaced with test_detect_missing_config_not_logged_in which has real assertions; env-forwarding test correctly verifies OPENAI_API_KEY is excluded
.env.example Documents KIMI_MODEL, KIMI_BIN, KIMI_API_KEY, and the overridable KIMI_SHARE_DIR; consistent with how CODEX_* and CLAUDE_CODE_* vars are documented
app/cli/wizard/config.py Adds KIMI_MODELS and ProviderOption for kimi in SUPPORTED_PROVIDERS; consistent with other CLI provider entries
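For illustration, the parse() contract called out above (RuntimeError on empty stdout, consistent with what runner.py catches) could be sketched as follows; this is not the adapter's actual code:

```python
# Illustrative-only sketch of the parse() error contract: empty stdout must
# raise RuntimeError rather than silently returning an empty response.
def parse(stdout: str, stderr: str = "", returncode: int = 0) -> str:
    content = stdout.strip()
    if not content:
        detail = f" (stderr: {stderr.strip()})" if stderr.strip() else ""
        raise RuntimeError("kimi produced no output" + detail)
    return content
```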

Sequence Diagram

sequenceDiagram
    participant Caller
    participant CLIBackedLLMClient
    participant KimiAdapter
    participant _check_kimi_auth
    participant kimi_CLI

    Caller->>CLIBackedLLMClient: invoke(prompt)
    CLIBackedLLMClient->>KimiAdapter: detect()
    KimiAdapter->>kimi_CLI: kimi --version
    kimi_CLI-->>KimiAdapter: version string
    KimiAdapter->>_check_kimi_auth: check KIMI_API_KEY / config.toml
    _check_kimi_auth-->>KimiAdapter: (logged_in, detail)
    KimiAdapter-->>CLIBackedLLMClient: CLIProbe
    alt not installed or not logged in
        CLIBackedLLMClient-->>Caller: raise RuntimeError
    end
    CLIBackedLLMClient->>KimiAdapter: build(prompt, model, workspace)
    KimiAdapter-->>CLIBackedLLMClient: CLIInvocation(argv, stdin)
    CLIBackedLLMClient->>kimi_CLI: subprocess.run(argv, stdin=prompt, env=KIMI_* filtered)
    kimi_CLI-->>CLIBackedLLMClient: stdout / returncode
    alt returncode != 0
        CLIBackedLLMClient->>KimiAdapter: explain_failure()
        CLIBackedLLMClient-->>Caller: raise RuntimeError
    end
    CLIBackedLLMClient->>KimiAdapter: parse(stdout)
    alt stdout empty
        KimiAdapter-->>Caller: raise RuntimeError
    end
    KimiAdapter-->>CLIBackedLLMClient: content string
    CLIBackedLLMClient-->>Caller: LLMResponse(content)

Reviews (5). Last reviewed commit: "fix(llm-cli): final restoration of Claud..."

Comment thread app/integrations/llm_cli/kimi.py
Comment thread tests/integrations/llm_cli/test_kimi_adapter.py
Comment thread app/integrations/llm_cli/kimi.py Outdated
Comment thread app/integrations/llm_cli/kimi.py
@gitsofaryan
Contributor Author

Working on the Greptile suggestions.

@gitsofaryan
Contributor Author

gitsofaryan commented Apr 30, 2026

@greptile review

@gitsofaryan gitsofaryan force-pushed the feat/kimi-cli-1112 branch 4 times, most recently from 0402c80 to c8f47e6 Compare April 30, 2026 12:05
Comment thread app/integrations/llm_cli/kimi.py
@gitsofaryan
Contributor Author

@greptile full-review

@gitsofaryan
Contributor Author

@greptile review

@gitsofaryan
Contributor Author

@muddlebee @yashksaini-coder PTAL!!

Collaborator

@yashksaini-coder yashksaini-coder left a comment


Reviewed the full diff, ran quality checks locally, and verified every Greptile-flagged P1/P2 is resolved.

Local results

  • pytest tests/integrations/llm_cli/ — 28/28 passed (Kimi + Codex)
  • ruff check — clean
  • mypy — clean

Verified P1 fixes

  • Auth correctly checks KIMI_API_KEY env first, then ~/.kimi/config.toml via tomllib — no longer testing against OPENAI_API_KEY
  • test_detect_missing_config_not_logged_in has explicit assertions (probe.installed is True, probe.logged_in is False)
  • tomllib stdlib only — correct since the project requires Python ≥ 3.11
  • parse() raises RuntimeError on empty stdout — matches what runner.py catches

Design looks solid
KimiAdapter follows the same duck-typed LLMCLIAdapter protocol as CodexAdapter — no inheritance needed since LLMCLIAdapter is a Protocol. The lazy factory pattern in registry.py avoids circular imports. _SAFE_SUBPROCESS_ENV_PREFIXES extended with "KIMI_" is the correct place for env allowlisting.

Two minor observations (non-blocking)

  1. _fallback_kimi_paths appends ~/.local/bin entries that _default_cli_fallback_paths("kimi") already emits on Linux. Harmless since resolve_cli_binary deduplicates internally, but the extra loop is noise.
  2. No test covers the min_version enforcement branch (where version string is present but below 1.40.0). Worth a follow-up for completeness.

Neither is a blocker. The adapter is correct, tests are meaningful, and it integrates cleanly with the existing LLM CLI plumbing.
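The missing min_version coverage noted in observation 2 could be sketched roughly like this; the helper name and _MIN_KIMI_VERSION constant are assumptions, not the adapter's actual internals:

```python
# Hypothetical version gate and the suggested below-minimum test case.
_MIN_KIMI_VERSION = (1, 40, 0)


def version_ok(version: str, minimum: tuple[int, ...] = _MIN_KIMI_VERSION) -> bool:
    """True when a dotted version string meets the minimum (tuple compare)."""
    parts = tuple(int(p) for p in version.strip().split("."))
    return parts >= minimum


def test_detect_rejects_version_below_minimum() -> None:
    assert version_ok("1.40.0")
    assert version_ok("2.0.1")
    assert not version_ok("1.39.9")  # below 1.40.0: probe should not pass
```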

Approved.

@muddlebee
Collaborator

@gitsofaryan
[screenshot]
missing

@muddlebee
Collaborator

also check other Acceptance criteria, many missing gaps

@muddlebee
Collaborator

muddlebee commented Apr 30, 2026

missing entry for env vars at .env.example

refer app/integrations/llm_cli/AGENTS.md for specs

@muddlebee muddlebee added llm-cli Subprocess LLM CLI integration (LLMCLIAdapter / registry) invalid This doesn't seem right labels May 3, 2026
@gitsofaryan gitsofaryan force-pushed the feat/kimi-cli-1112 branch from fccf525 to fbcac3a Compare May 3, 2026 08:35
@gitsofaryan gitsofaryan force-pushed the feat/kimi-cli-1112 branch from fbcac3a to 38f6c95 Compare May 3, 2026 08:45
@Devesh36
Collaborator

Devesh36 commented May 4, 2026

@greptile-apps review

@Devesh36
Collaborator

Devesh36 commented May 4, 2026

also @gitsofaryan an e2e demo

@muddlebee
Collaborator

Please ignore my previous reviews; they were outdated.

Comment thread app/integrations/llm_cli/kimi.py Outdated
return m.group(1) if m else None


def _check_kimi_auth() -> tuple[bool | None, str]:
Collaborator


Probe Kimi subscription first with kimi login status (mirror codex.py login status). Keep KIMI_API_KEY and config.toml parsing as fallback when that command is absent or the result is inconclusive.

Collaborator


Implementation sketch:

  1. Run subprocess.run([binary_path, "login", "status"], capture_output=True, text=True, timeout=_PROBE_TIMEOUT_SEC) from _probe_binary after --version succeeds so you always have the resolved executable.

  2. Add _classify_kimi_login_status(returncode, stdout, stderr) like codex _classify_codex_auth: merge stdout+stderr, check negative phrases before positive ones ("not logged in" before "logged in"), map timeouts/OS errors to logged_in=None.

  3. If the subprocess fails or stderr suggests unknown subcommand or generic usage error, treat the CLI probe as unavailable and fall through to trimmed KIMI_API_KEY then config.toml providers.

  4. If login status reports not authenticated but KIMI_API_KEY or config.toml still has a key, return logged_in=True with detail naming that source so API-key-only setups stay green while OAuth subscribers get CLI-first signal when status works.

  5. Extend tests: mock subprocess ordering version call then login status then fallback paths.


gitsofaryan and others added 5 commits May 4, 2026 16:37
- Add KimiAdapter implementing LLMCLIAdapter protocol
- Implement three-tier auth probing: kimi login status -> KIMI_API_KEY -> config.toml
- Add binary detection with version checking (v1.40.0+)
- Add comprehensive test coverage (9/9 passing)
- Update registry, config, and wizard integration
- Add environment variable forwarding for KIMI_* prefixes
@gitsofaryan
Contributor Author

@muddlebee Hey, I'm facing a payment issue with the Kimi API: CLI login works, but API calls fail with exit code 75 (quota/billing error). Can you help check whether the account billing is sorted? Thanks

Recording.2026-05-04.165418.mp4

@muddlebee muddlebee added needs-work and removed invalid This doesn't seem right labels May 4, 2026
@gitsofaryan
Contributor Author

@muddlebee please suggest fixes if any!

Resolved conflict in app/integrations/llm_cli/runner.py.

PR Tracer-Cloud#1295 extracted the subprocess env allowlist into a new
app/integrations/llm_cli/subprocess_env.py module. This branch had
modified the inline list in runner.py to add the KIMI_ prefix.
Resolution:
- runner.py: take origin/main version (imports build_cli_subprocess_env
  from subprocess_env.py, keeps _build_subprocess_env back-compat alias).
- subprocess_env.py: add "KIMI_" to _SAFE_SUBPROCESS_ENV_PREFIXES so
  Kimi env vars (KIMI_API_KEY, KIMI_BASE_URL, etc.) are forwarded to
  the subprocess.

This preserves both PR Tracer-Cloud#1295's refactor and PR Tracer-Cloud#1139's Kimi support.
@muddlebee
Collaborator

@gitsofaryan unfortunately the team can't provide a Kimi CLI subscription or auth.

[screenshot]

and this error ideally shouldn't occur

@yashksaini-coder
Collaborator

Hey @gitsofaryan, took a look at the merge conflict locally. There's just one conflict in app/integrations/llm_cli/runner.py, caused by PR #1295 (merged yesterday) which extracted the subprocess env allowlist into a new app/integrations/llm_cli/subprocess_env.py module while your branch was modifying the inline list to add KIMI_.

Fix is two steps. Run these from your branch:

git fetch origin
git merge origin/main

# Conflict in runner.py — take main's version (the new shape uses subprocess_env.py)
git checkout --theirs app/integrations/llm_cli/runner.py

# Add KIMI_ to the prefix list in the new module
sed -i 's/("LC_", "CODEX_", "CLAUDE_")/("LC_", "CODEX_", "CLAUDE_", "KIMI_")/' app/integrations/llm_cli/subprocess_env.py

# Stage and finish the merge
git add app/integrations/llm_cli/runner.py app/integrations/llm_cli/subprocess_env.py
git commit
git push

The KIMI_ prefix is what makes KIMI_API_KEY, KIMI_BASE_URL, etc. forward through to the subprocess env, which is what your test at tests/integrations/llm_cli/test_kimi_adapter.py:220-243 already asserts. After the merge, that prefix lives in subprocess_env.py instead of runner.py, but the behavior is identical.
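The prefix-based allowlisting described here could be sketched as below. This is an assumption-laden sketch: the function name and the exact SAFE_KEYS set are guesses; the real logic lives in app/integrations/llm_cli/subprocess_env.py:

```python
# Sketch of the subprocess env sanitizer: only allowlisted keys and
# vendor-prefixed vars (including KIMI_) are forwarded; everything else,
# e.g. OPENAI_API_KEY, is dropped. Names/members are assumptions.
SAFE_PREFIXES = ("LC_", "CODEX_", "CLAUDE_", "KIMI_")
SAFE_KEYS = frozenset({"PATH", "HOME", "USER", "LOGNAME", "LANG", "TERM"})


def build_subprocess_env(environ: dict[str, str]) -> dict[str, str]:
    return {
        key: value
        for key, value in environ.items()
        if key in SAFE_KEYS or key.startswith(SAFE_PREFIXES)
    }
```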

I verified locally: ruff check and ruff format --check are clean after the resolution, no imports break, all your Kimi-specific changes survive the auto-merge of the other files.

Separately on the Kimi API quota error (exit code 75) — that's an account-side billing thing, not something the merge fix addresses. Worth checking your Moonshot AI billing dashboard or rotating to an account with credit. Good luck!

@yashksaini-coder
Collaborator

@gitsofaryan, ended up pushing the resolution to your branch directly since you've got Allow edits from maintainers enabled — commit a4aa14f5 is the merge with main. The PR should flip from CONFLICTING to MERGEABLE shortly once GitHub re-evaluates it.

You can ignore the previous comment with the manual steps. To sync your local clone, just:

git fetch origin
git reset --hard origin/feat/kimi-cli-1112  # if you have no local changes ahead
# or
git pull --rebase  # if you do have local work to keep

Same fix as I described before: took main's runner.py (since PR #1295 extracted the env list into subprocess_env.py), then added "KIMI_" to the prefix tuple in the new module so your env var threading still works. Lint, format, and the structure of your Kimi tests all check out locally. Sorry for any toes stepped on — happy to revert if you'd rather redo it yourself.

Main moved while the previous merge was in flight: PR Tracer-Cloud#1149 added Cursor
Agent CLI support touching the same prefix tuple, env-var docs, and
provider-allowlist as this branch.

Resolutions (all "combine both" merges):
- subprocess_env.py: prefix tuple now includes both CURSOR_ and KIMI_
- app/config.py: provider early-return now includes both cursor and kimi
- .env.example: keep both Cursor and Kimi env-var blocks
@yashksaini-coder
Collaborator

Heads up — main moved between my first push and now (PR #1149 landing the Cursor Agent CLI added the same shape of changes you did, in the same files). Pushed a second merge commit 4225c28f to clear the new conflicts. PR is now MERGEABLE (state was UNSTABLE last I checked because of CI status, not merge conflicts — worth a glance at any failing checks before you merge).

Resolutions in this second merge were all "combine both additions":

  • app/integrations/llm_cli/subprocess_env.py — prefix tuple now has both CURSOR_ and KIMI_
  • app/config.py — provider early-return now lists both cursor and kimi
  • .env.example — kept both Cursor and Kimi env-var blocks

If you've made any local changes since this morning, git pull --rebase should be a clean rebase. Otherwise git fetch origin && git reset --hard origin/feat/kimi-cli-1112 to sync.

@gitsofaryan
Contributor Author

@muddlebee fixed (UTF encoding issue on Windows). I'm unable to generate a Kimi API key; I couldn't find any free/dev API usage.
[screenshot]

@yashksaini-coder thanks, also addressed the changes.

@muddlebee
Collaborator

@gitsofaryan that's not an issue. Given the e2e test cases are covered, I'm taking this up; I'll review and add necessary changes if required :)

@muddlebee
Collaborator

assigning to myself

@muddlebee muddlebee self-assigned this May 5, 2026
muddlebee added 3 commits May 5, 2026 22:33
Resolve conflicts by keeping Kimi CLI integration alongside OpenCode CLI:
wizard providers, LLM_PROVIDER literals/registry, subprocess env prefixes,
and merged .env.example provider comment.
- Run config/env fallback whenever login status is not confidently True
  (including timeouts and spawn errors).
- Update AGENTS.md: Kimi/OpenCode rows, Kimi probe notes, KIMI_ prefix.
@muddlebee muddlebee merged commit 42f1b41 into Tracer-Cloud:main May 5, 2026
5 checks passed
@github-actions
Contributor

github-actions Bot commented May 5, 2026

🎊 Achievement unlocked: PR Merged. @gitsofaryan passed code review, survived CI, and shipped. Respect. 🤝


👋 Join us on Discord (OpenSRE): hang out, contribute, or hunt for features and issues. Everyone's welcome.

@gitsofaryan
Contributor Author

@muddlebee Thanks man.


Labels

llm-cli Subprocess LLM CLI integration (LLMCLIAdapter / registry) needs-work

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[CLI] Kimi Code CLI (non-interactive)

5 participants