
Conversation

willkill07 (Member) commented Sep 25, 2025

Description

In #848 I improved the Google ADK structure but mistakenly committed the removal of litellm.

This PR re-adds the original contribution from #726

Closes

By submitting this PR I confirm:

  • I am familiar with the Contributing Guidelines.
  • We require that all contributors "sign-off" on their commits. This certifies that the contribution is your original work, or you have rights to submit it under the same license, or a compatible license.
    • Any contribution which contains commits that are not Signed-Off will not be accepted.
  • When the PR is ready for review, new or existing tests cover these changes.
  • When the PR is ready for review, the documentation is up to date with these changes.

Summary by CodeRabbit

  • New Features
    • Added LiteLLM as a selectable LLM provider.
    • Introduced configuration options including API key, base URL (api_base), model, temperature, top_p, and seed.
    • Enabled an ADK client endpoint for LiteLLM to use within workflows.
  • Chores
    • Provider auto-registration on import to simplify setup.

@willkill07 willkill07 requested a review from a team as a code owner September 25, 2025 23:06
coderabbitai bot commented Sep 25, 2025

Walkthrough

Introduces a new LiteLlm provider and model configuration, registers it in the provider registry, and adds an ADK endpoint to construct a LiteLlm client from config. Also updates the LLM registry imports to activate registration via side effects.

Changes

Cohort / File(s) — Summary of Changes

  • LiteLlm provider implementation (src/nat/llm/litellm_llm.py): Added LiteLlmModelConfig (extends the base config and mixins) and a litellm_model async factory registered via @register_llm_provider, yielding LLMProviderInfo.
  • ADK LiteLlm endpoint (packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py): Added a litellm_adk(litellm_config: LiteLlmModelConfig, _builder: Builder) endpoint creating a LiteLlm instance from config (excluding certain fields) and registering it with the ADK wrapper. Public import of LiteLlmModelConfig.
  • Provider auto-registration (src/nat/llm/register.py): Imported litellm_llm to trigger provider registration via module import side effects.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant ADK as ADK Wrapper
  participant Endpoint as litellm_adk
  participant Config as LiteLlmModelConfig
  participant LiteLlm as LiteLlm Client

  User->>ADK: Request LLM client (LiteLlm)
  ADK->>Endpoint: Invoke with config
  Endpoint->>Config: Validate/serialize (exclude type, max_retries, thinking)
  Endpoint->>LiteLlm: Construct LiteLlm(**config_dump)
  LiteLlm-->>ADK: Client instance
  ADK-->>User: Ready client
  note over Endpoint,LiteLlm: New ADK endpoint and client construction
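The construction step in the diagram above (serialize the config, exclude provider-only fields such as type, max_retries, and thinking, then pass the rest to the client constructor) can be sketched as follows. This is a self-contained approximation: `StubLiteLlm` is a hypothetical stand-in for the ADK's LiteLlm class, and the real endpoint uses pydantic's `model_dump(exclude=...)` rather than a dict comprehension.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class StubLiteLlm:
    """Hypothetical stand-in for the ADK LiteLlm client."""

    model: str
    temperature: float = 0.0


def build_client(config: dict[str, Any], exclude: set[str]) -> StubLiteLlm:
    # Mirror the diagram: drop provider-only fields from the serialized
    # config and pass the remainder straight to the client constructor.
    kwargs = {k: v for k, v in config.items() if k not in exclude}
    return StubLiteLlm(**kwargs)


raw = {"type": "litellm", "max_retries": 3, "thinking": None,
       "model": "gpt-4o-mini", "temperature": 0.1}
client = build_client(raw, exclude={"type", "max_retries", "thinking"})
print(client.model)  # gpt-4o-mini
```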
sequenceDiagram
  autonumber
  participant Importer as nat.llm.register
  participant Module as litellm_llm
  participant Registry as Provider Registry

  Importer->>Module: import litellm_llm
  Module->>Registry: @register_llm_provider(LiteLlmModelConfig)
  Registry-->>Module: Registration complete
  note over Module,Registry: New provider registered via import side effect
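The registration-by-import mechanism in the second diagram works because a decorator applied at module top level runs when the module is first imported. A minimal toy version of the pattern, not the project's actual @register_llm_provider implementation:

```python
from typing import Callable

PROVIDER_REGISTRY: dict[type, Callable] = {}


def register_llm_provider(config_cls: type) -> Callable:
    # Decorator that records the factory at definition time; because
    # decorators execute on import, merely importing the provider
    # module (as src/nat/llm/register.py does) registers it.
    def wrap(factory: Callable) -> Callable:
        PROVIDER_REGISTRY[config_cls] = factory
        return factory
    return wrap


class LiteLlmModelConfig:
    """Placeholder config type for the sketch."""


@register_llm_provider(LiteLlmModelConfig)
def litellm_model(config: LiteLlmModelConfig) -> str:
    return f"provider for {type(config).__name__}"


print(LiteLlmModelConfig in PROVIDER_REGISTRY)  # True
```

This is why the fix in this PR is a one-line import: removing the import of litellm_llm silently un-registers the provider, and re-adding it restores registration.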

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

external, feature request, non-breaking

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 75.00%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title uses the imperative mood ("re-add"), clearly describes the main change of restoring the litellm functionality after it was accidentally removed, and is concise at well under the recommended length.

@coderabbitai coderabbitai bot added external This issue was filed by someone outside of the NeMo Agent toolkit team feature request New feature or request non-breaking Non-breaking change labels Sep 25, 2025
coderabbitai bot left a comment
Actionable comments posted: 2

📜 Review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3f6c917 and c4b7ffc.

📒 Files selected for processing (3)
  • packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py (2 hunks)
  • src/nat/llm/litellm_llm.py (1 hunks)
  • src/nat/llm/register.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{py,yaml,yml}

📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)

**/*.{py,yaml,yml}: Configure response_seq as a list of strings; values cycle per call, and [] yields an empty string.
Configure delay_ms to inject per-call artificial latency in milliseconds for nat_test_llm.

Files:

  • packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
  • src/nat/llm/register.py
  • src/nat/llm/litellm_llm.py
**/*.py

📄 CodeRabbit inference engine (.cursor/rules/nat-test-llm.mdc)

**/*.py: Programmatic use: create TestLLMConfig(response_seq=[...], delay_ms=...), add with builder.add_llm("", cfg).
When retrieving the test LLM wrapper, use builder.get_llm(name, wrapper_type=LLMFrameworkEnum.) and call the framework’s method (e.g., ainvoke, achat, call).

**/*.py: In code comments/identifiers use NAT abbreviations as specified: nat for API namespace/CLI, nvidia-nat for package name, NAT for env var prefixes; do not use these abbreviations in documentation
Follow PEP 20 and PEP 8; run yapf with column_limit=120; use 4-space indentation; end files with a single trailing newline
Run ruff check --fix as linter (not formatter) using pyproject.toml config; fix warnings unless explicitly ignored
Respect naming: snake_case for functions/variables, PascalCase for classes, UPPER_CASE for constants
Treat pyright warnings as errors during development
Exception handling: use bare raise to re-raise; log with logger.error() when re-raising to avoid duplicate stack traces; use logger.exception() when catching without re-raising
Provide Google-style docstrings for every public module, class, function, and CLI command; first line concise and ending with a period; surround code entities with backticks
Validate and sanitize all user input, especially in web or CLI interfaces
Prefer httpx with SSL verification enabled by default and follow OWASP Top-10 recommendations
Use async/await for I/O-bound work; profile CPU-heavy paths with cProfile or mprof before optimizing; cache expensive computations with functools.lru_cache or external cache; leverage NumPy vectorized operations when beneficial

Files:

  • packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
  • src/nat/llm/register.py
  • src/nat/llm/litellm_llm.py
packages/*/src/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Importable Python code inside packages must live under packages//src/

Files:

  • packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
{src/**/*.py,packages/*/src/**/*.py}

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

All public APIs must have Python 3.11+ type hints on parameters and return values; prefer typing/collections.abc abstractions; use typing.Annotated when useful

Files:

  • packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
  • src/nat/llm/register.py
  • src/nat/llm/litellm_llm.py
**/*

⚙️ CodeRabbit configuration file

**/*: # Code Review Instructions

  • Ensure the code follows best practices and coding standards. For Python code, follow PEP 20 and PEP 8 for style guidelines.
  • Check for security vulnerabilities and potential issues.
  • Python methods should use type hints for all parameters and return values. Example:
    def my_function(param1: int, param2: str) -> bool:
        pass
  • For Python exception handling, ensure proper stack trace preservation:
    • When re-raising exceptions: use bare raise statements to maintain the original stack trace, and use logger.error() (not logger.exception()) to avoid duplicate stack trace output.
    • When catching and logging exceptions without re-raising: always use logger.exception() to capture the full stack trace information.

Documentation Review Instructions

  • Verify that documentation and comments are clear and comprehensive.
  • Verify that the documentation doesn't contain any TODOs, FIXMEs, or placeholder text like "lorem ipsum".
  • Verify that the documentation doesn't contain any offensive or outdated terms.
  • Verify that documentation and comments are free of spelling mistakes. The documentation must not contain any words listed in the ci/vale/styles/config/vocabularies/nat/reject.txt file; words that might appear to be spelling mistakes but are listed in the ci/vale/styles/config/vocabularies/nat/accept.txt file are OK.

Misc.

  • All code (except .mdc files that contain Cursor rules) should be licensed under the Apache License 2.0 and should contain an Apache License 2.0 header comment at the top of each file.
  • Confirm that copyright years are up to date whenever a file is changed.

Files:

  • packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
  • src/nat/llm/register.py
  • src/nat/llm/litellm_llm.py
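The exception-handling rule quoted above (bare raise with logger.error() when re-raising; logger.exception() when swallowing) can be illustrated with a small self-contained example; the function names here are invented for illustration, not part of the toolkit.

```python
import logging

logger = logging.getLogger(__name__)


def load_config(path: str) -> dict:
    try:
        with open(path, encoding="utf-8") as f:
            return {"raw": f.read()}
    except OSError:
        # Re-raising: logger.error() plus a bare `raise` keeps the
        # original traceback without logging the stack trace twice.
        logger.error("Failed to read config at %s", path)
        raise


def load_config_or_default(path: str) -> dict:
    try:
        return load_config(path)
    except OSError:
        # Catching without re-raising: logger.exception() records the
        # full stack trace exactly once.
        logger.exception("Falling back to defaults for %s", path)
        return {}


print(load_config_or_default("/nonexistent/config.yml"))  # {}
```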
packages/**/*

⚙️ CodeRabbit configuration file

packages/**/*:

  • This directory contains optional plugin packages for the toolkit; each should contain a pyproject.toml file.
  • The pyproject.toml file should declare a dependency on nvidia-nat or another package with a name starting with nvidia-nat-. This dependency should be declared using ~=<version>, and the version should be a two-digit version (ex: ~=1.0).
  • Not all packages contain Python code; if they do, they should also contain their own set of tests, in a tests/ directory at the same level as the pyproject.toml file.

Files:

  • packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py
src/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

All importable Python code must live under src/ (or packages//src/)

Files:

  • src/nat/llm/register.py
  • src/nat/llm/litellm_llm.py
src/nat/**/*

📄 CodeRabbit inference engine (.cursor/rules/general.mdc)

Changes in src/nat should prioritize backward compatibility

Files:

  • src/nat/llm/register.py
  • src/nat/llm/litellm_llm.py

⚙️ CodeRabbit configuration file

This directory contains the core functionality of the toolkit. Changes should prioritize backward compatibility.

Files:

  • src/nat/llm/register.py
  • src/nat/llm/litellm_llm.py
🧬 Code graph analysis (2)
packages/nvidia_nat_adk/src/nat/plugins/adk/llm.py (3)
src/nat/llm/litellm_llm.py (1)
  • LiteLlmModelConfig (32-52)
src/nat/builder/framework_enum.py (1)
  • LLMFrameworkEnum (19-25)
src/nat/builder/builder.py (1)
  • Builder (68-290)
src/nat/llm/litellm_llm.py (4)
src/nat/builder/builder.py (1)
  • Builder (68-290)
src/nat/data_models/temperature_mixin.py (1)
  • TemperatureMixin (25-44)
src/nat/data_models/thinking_mixin.py (1)
  • ThinkingMixin (29-86)
src/nat/data_models/top_p_mixin.py (1)
  • TopPMixin (25-44)
🔇 Additional comments (1)
src/nat/llm/register.py (1)

23-26: Restored LiteLLM registration looks good.

Re-introducing litellm_llm here reactivates its side-effect registration and keeps the provider discoverable alongside the others.

willkill07 (Member, Author) commented:
/merge

@rapids-bot rapids-bot bot merged commit 74a2ca8 into NVIDIA:develop Sep 25, 2025
27 of 28 checks passed
@willkill07 willkill07 deleted the wkk_restore-litellm branch October 23, 2025 18:17