Closed
Labels: bug (Something isn't working)
Description
Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the README.md.
- [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
I run the following command to start the server:
python -m llama_cpp.server --model ../llama.cpp/models/airoboros-7b-gpt4/airoboros-7b-gpt4-1.4.ggmlv3.q4_0.bin --model_alias gpt-4
The server starts fine if I don't pass the --model_alias parameter, but when I set --model_alias as above, it fails with the error below.
Current Behavior
The command produces the following output:
/Users/hongyin/anaconda3/envs/llmcpp/lib/python3.11/site-packages/pydantic/_internal/_fields.py:126: UserWarning: Field "model_alias" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ('settings_',)`.
warnings.warn(
usage: __main__.py [-h] [--model MODEL] [--model_alias MODEL_ALIAS] [--n_ctx N_CTX] [--n_gpu_layers N_GPU_LAYERS]
[--tensor_split TENSOR_SPLIT] [--rope_freq_base ROPE_FREQ_BASE] [--rope_freq_scale ROPE_FREQ_SCALE]
[--seed SEED] [--n_batch N_BATCH] [--n_threads N_THREADS] [--f16_kv F16_KV] [--use_mlock USE_MLOCK]
[--use_mmap USE_MMAP] [--embedding EMBEDDING] [--low_vram LOW_VRAM]
[--last_n_tokens_size LAST_N_TOKENS_SIZE] [--logits_all LOGITS_ALL] [--cache CACHE]
[--cache_type CACHE_TYPE] [--cache_size CACHE_SIZE] [--vocab_only VOCAB_ONLY] [--verbose VERBOSE]
[--host HOST] [--port PORT] [--interrupt_requests INTERRUPT_REQUESTS]
__main__.py: error: argument --model_alias: invalid Optional value: 'gpt-4'
(llmcpp) hongyin@honyindeMacBook-Pro pyai % python --version
Python 3.11.4
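For what it's worth, the failure can be reproduced with argparse alone, independent of the server code: `typing.Optional[str]` is not a callable converter, so argparse fails when it tries to convert the value. A hypothetical minimal script (not taken from llama-cpp-python):

```python
import argparse
from typing import Optional

parser = argparse.ArgumentParser()
# Passing a typing construct as `type` is the bug: argparse calls type(value)
# during parsing, and Optional[str] (i.e. Union[str, None]) cannot be called.
parser.add_argument("--model_alias", type=Optional[str])

try:
    parser.parse_args(["--model_alias", "gpt-4"])
except SystemExit:
    # argparse reports "invalid Optional value: 'gpt-4'" and exits with code 2
    print("conversion failed")
```

This matches the error message above: argparse names the converter via its `__name__`, which for `Optional[str]` is "Optional".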
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
- Physical (or virtual) hardware you are using, e.g. for Linux:
$ lscpu
- Operating System, e.g. for Linux:
$ uname -a
Darwin honyindeMacBook-Pro.local 22.3.0 Darwin Kernel Version 22.3.0: Mon Jan 30 20:39:46 PST 2023; root:xnu-8792.81.3~2/RELEASE_ARM64_T6020 arm64
- SDK version, e.g. for Linux:
$ python3 --version
3.11.4
$ make --version
3.81
$ g++ --version
14.0.0
Failure Information (for bugs)
(Same pydantic warning and argparse error as shown under Current Behavior above.)
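As an aside, the pydantic UserWarning about the protected "model_" namespace is separate from the argparse failure; per pydantic's own hint it can be silenced by clearing the protected namespaces on the model config. A minimal sketch using plain pydantic v2 (the real Settings class lives in llama_cpp.server.app and subclasses pydantic-settings, so treat this as an illustration only):

```python
from typing import Optional

from pydantic import BaseModel, ConfigDict


class Settings(BaseModel):
    # An empty tuple disables the default ("model_",) protected namespace,
    # so field names like model_alias no longer trigger the UserWarning.
    model_config = ConfigDict(protected_namespaces=())

    model: str = "models/default.bin"
    model_alias: Optional[str] = None


s = Settings(model_alias="gpt-4")
print(s.model_alias)  # gpt-4
```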
Steps to Reproduce
- run command
python -m llama_cpp.server --model models/airoboros-7b-gpt4/airoboros-7b-gpt4-1.4.ggmlv3.q4_0.bin --model_alias gpt-4
Failure Logs
llama-cpp-python$ git log | head -1
commit a4fe3fe3502fa6d73575f77220b8694a420c7ebd
llama-cpp-python$ python3 --version
Python 3.11.4
llama-cpp-python$ pip list | egrep "uvicorn|fastapi|sse-starlette|numpy"
uvicorn=0.23.1
anyio=3.7.1
starlette=0.30.0
fastapi=0.100.0
pydantic_settings=2.0.2
sse_starlette=1.6.1
I tried to debug the code to solve this problem: if the `type` keyword of `add_argument` is not set, everything works.
import os
import argparse

import uvicorn

from llama_cpp.server.app import create_app, Settings

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    for name, field in Settings.model_fields.items():
        description = field.description
        if field.default is not None and description is not None:
            description += f" (default: {field.default})"
        print(name)
        print(field.annotation)
        parser.add_argument(
            f"--{name}",
            dest=name,
            type=field.annotation if field.annotation is not None else str,  # If I comment out this line, it works fine
            help=description,
        )

    args = parser.parse_args()
    settings = Settings(**{k: v for k, v in vars(args).items() if v is not None})
    app = create_app(settings=settings)
    uvicorn.run(
        app, host=os.getenv("HOST", settings.host), port=int(os.getenv("PORT", settings.port))
    )