Commit f477687

Authored by YeewahChan, you-n-g, and Yihua Chen
feat(backend): integrate LiteLLM API Backend (#564)
* File structure for supporting litellm
* more litellm support
* feat: Add CachedAPIBackend class and dynamic API backend retrieval function
* fix: update benchmark folder path and add default values for architecture and hyperparameters
* feat: add LiteLLMAPIBackend and DeprecBackend; changed structure of the project; with bus
* fix: deprec_backend
* feat: Add LiteLLMAPIBackend class and related features; update configuration and test cases
* feat: Enhance LiteLLMAPIBackend with encoder support and dynamic argument handling; enhance log colors
* lint
* fix lint...
* fix: Lint
* fix: make auto-lint
* fix: test oai
* fix: redundant _abckend.py
* fix: Optimize LiteLLMAPIBackend token counting function and clean up unused code; add test on this function
* feat: Add LiteLLMSettings class and update model settings usage
* fix: Update LiteLLMSettings environment variable prefix and model configurations
* fix: gitignore
* test: Consolidate and relocate test files for litellm backend and oai
* fix: lint
* fix: lint
* auto lint
* lint
* LINT
* lint
* chore: remove deprecated backend configuration comments
* refactor: Remove unused functions and imports from deprec.py and llm_utils.py
* refactor: Move md5_hash function from deprec.py to llm_utils.py
* chore: Remove extra newline and add missing import in deprec.py
* lint
* refactor: Move md5_hash function to utils module
* lint
* lint
* lint

---------

Co-authored-by: Young <[email protected]>
Co-authored-by: Yihua Chen <[email protected]>
1 parent 0b0a2dc commit f477687
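The squashed message above mentions a "dynamic API backend retrieval function" alongside the new LiteLLMAPIBackend and DeprecBackend classes. A minimal sketch of how such retrieval could work; the names `get_api_backend` and `BACKEND_REGISTRY`, and the stand-in classes, are hypothetical illustrations, not the repository's actual identifiers:

```python
# Hypothetical sketch of dynamic backend retrieval: look up a backend
# class by name in a registry and instantiate it.
from typing import Dict, Type


class APIBackend:
    """Stand-in for the abstract backend base class."""


class LiteLLMAPIBackend(APIBackend):
    """Stand-in for the LiteLLM-based backend."""


class DeprecBackend(APIBackend):
    """Stand-in for the legacy (deprecated) backend."""


BACKEND_REGISTRY: Dict[str, Type[APIBackend]] = {
    "litellm": LiteLLMAPIBackend,
    "deprec": DeprecBackend,
}


def get_api_backend(name: str = "litellm") -> APIBackend:
    """Instantiate the backend registered under ``name``."""
    try:
        return BACKEND_REGISTRY[name]()
    except KeyError as e:
        raise ValueError(f"Unknown backend: {name!r}") from e
```

A registry like this lets configuration (for example, an environment variable read by a settings class) choose the backend at runtime without call sites importing concrete classes.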

File tree

18 files changed (+1216 / -954 lines)


.gitignore

Lines changed: 2 additions & 1 deletion
@@ -111,7 +111,7 @@ celerybeat.pid
 *.sage.py
 
 # Environments
-.env
+.env*
 .venv
 env/
 venv/
@@ -172,3 +172,4 @@ mlruns/
 *.out
 *.sh
 .aider*
+rdagent/app/benchmark/factor/example.json

rdagent/app/benchmark/model/eval.py

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
 from rdagent.components.coder.model_coder.benchmark.eval import ModelImpValEval
 from rdagent.components.coder.model_coder.one_shot import ModelCodeWriter
 
-bench_folder = DIRNAME.parent.parent / "components" / "coder" / "model_coder" / "benchmark"
+bench_folder = DIRNAME.parent.parent.parent / "components" / "coder" / "model_coder" / "benchmark"
 mtl = ModelTaskLoaderJson(str(bench_folder / "model_dict.json"))
 
 task_l = mtl.load()

rdagent/components/loader/task_loader.py

Lines changed: 2 additions & 0 deletions
@@ -75,6 +75,8 @@ def load(self, *argT, **kwargs) -> Sequence[ModelTask]:
                 formulation=model_data["formulation"],
                 variables=model_data["variables"],
                 model_type=model_data["model_type"],
+                architecture="",
+                hyperparameters="",
             )
             model_impl_task_list.append(model_impl_task)
         return model_impl_task_list
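The hunk above passes `architecture=""` and `hyperparameters=""` where `ModelTask` is built, which is the usual fix when a constructor gains new required fields: every call site must supply a value. A simplified stand-in showing the pattern; this `ModelTask` is illustrative, not the project's real class definition:

```python
# Simplified stand-in for ModelTask with the two new fields; when fields
# have no defaults, every constructor call must provide them explicitly.
from dataclasses import dataclass


@dataclass
class ModelTask:
    formulation: str
    variables: str
    model_type: str
    architecture: str       # new field in this commit
    hyperparameters: str    # new field in this commit


task = ModelTask(
    formulation="y = f(x)",
    variables="x, y",
    model_type="Tabular",
    architecture="",        # empty-string placeholder, as in the diff
    hyperparameters="",     # empty-string placeholder, as in the diff
)
```

Giving the fields defaults in the class definition would be the alternative; passing explicit empty strings at the call site keeps the change local to the loader.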

rdagent/oai/backend/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+from .deprec import DeprecBackend
+from .litellm import LiteLLMAPIBackend

rdagent/oai/backend/base.py

Lines changed: 53 additions & 2 deletions
@@ -1,2 +1,53 @@
-class APIBackend:
-    """abstract"""
+from abc import ABC, abstractmethod
+from typing import Any, Dict, List, Optional, Tuple, Union
+
+
+class APIBackend(ABC):
+    """Abstract base class for LLM API backends"""
+
+    @abstractmethod
+    def build_chat_session(
+        self, conversation_id: Optional[str] = None, session_system_prompt: Optional[str] = None
+    ) -> Any:
+        """Create a new chat session"""
+        pass
+
+    @abstractmethod
+    def build_messages_and_create_chat_completion(
+        self,
+        user_prompt: str,
+        system_prompt: Optional[str] = None,
+        former_messages: Optional[List[Any]] = None,
+        chat_cache_prefix: str = "",
+        shrink_multiple_break: bool = False,
+        *args: Any,
+        **kwargs: Any,
+    ) -> str:
+        """Build messages and get chat completion"""
+        pass
+
+    @abstractmethod
+    def create_embedding(
+        self, input_content: Union[str, List[str]], *args: Any, **kwargs: Any
+    ) -> Union[List[Any], Any]:
+        """Create embeddings for input text"""
+        pass
+
+    @abstractmethod
+    def build_messages_and_calculate_token(
+        self,
+        user_prompt: str,
+        system_prompt: Optional[str],
+        former_messages: Optional[List[Dict[str, Any]]] = None,
+        *,
+        shrink_multiple_break: bool = False,
+    ) -> int:
+        """Build messages and calculate their token count"""
+        pass
+
+
+# TODO: seperate cache layer. try to be tranparent.
+class CachedAPIBackend(APIBackend):
+    ...
+    # @abstractmethod
+    # def none_cache_function ...
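The diff above turns `APIBackend` into an ABC with four abstract methods, so any concrete backend (such as the new `LiteLLMAPIBackend`) must override all of them before it can be instantiated. A toy sketch against a condensed copy of that interface; `EchoBackend` and its trivial method bodies are hypothetical and only demonstrate the contract:

```python
# Condensed copy of the APIBackend ABC from the diff, plus a toy concrete
# subclass. Instantiating APIBackend directly raises TypeError; EchoBackend
# works because it overrides every abstract method.
from abc import ABC, abstractmethod
from typing import Any, List, Optional, Union


class APIBackend(ABC):
    @abstractmethod
    def build_chat_session(self, conversation_id: Optional[str] = None,
                           session_system_prompt: Optional[str] = None) -> Any: ...

    @abstractmethod
    def build_messages_and_create_chat_completion(self, user_prompt: str,
                                                  *args: Any, **kwargs: Any) -> str: ...

    @abstractmethod
    def create_embedding(self, input_content: Union[str, List[str]],
                         *args: Any, **kwargs: Any) -> Any: ...

    @abstractmethod
    def build_messages_and_calculate_token(self, user_prompt: str,
                                           system_prompt: Optional[str]) -> int: ...


class EchoBackend(APIBackend):
    """Toy backend: no LLM calls, just deterministic stand-in behavior."""

    def build_chat_session(self, conversation_id=None, session_system_prompt=None):
        return {"id": conversation_id, "system": session_system_prompt, "history": []}

    def build_messages_and_create_chat_completion(self, user_prompt, *args, **kwargs):
        return f"echo: {user_prompt}"

    def create_embedding(self, input_content, *args, **kwargs):
        # one "embedding" per text; 1-d vector holding the text length
        texts = [input_content] if isinstance(input_content, str) else input_content
        return [[float(len(t))] for t in texts]

    def build_messages_and_calculate_token(self, user_prompt, system_prompt):
        # crude whitespace tokenization in place of a real tokenizer
        return len(user_prompt.split()) + len((system_prompt or "").split())
```

The `CachedAPIBackend` stub at the end of the diff suggests caching is meant to become a layer wrapping any such concrete backend, per the TODO about keeping it transparent.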

0 commit comments
