I'm trying to use a locally running, OpenAI API-compatible LLM (vLLM in a Docker container). Is this supported, or is there only support for the official OpenAI API with an OpenAI API key? If local LLMs are supported, what should go in the configuration?
Here's the config I am trying to use:
```yaml
llms:
  local_llm:
    _type: openai
    api_base: "http://192.168.5.173:8000/v1"
    model: "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
    temperature: 0.7
    max_tokens: 1024
    api_key: "sk-abc123"
```
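To rule out the vLLM server itself, here's a minimal sketch (using the plain `openai` Python client directly, not aiq) that hits the same endpoint with the same key and model as the config above:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server,
# using the same endpoint and key as in the aiq config above.
client = OpenAI(
    base_url="http://192.168.5.173:8000/v1",
    api_key="sk-abc123",
)

resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    temperature=0.7,
    max_tokens=1024,
)
print(resp.choices[0].message.content)
```

This works against the container, so the endpoint and key pair are fine on their own.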
I'm running my local LLM with this command:
```bash
docker run \
  --runtime nvidia \
  --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --api-key sk-abc123 \
  --model deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
  --gpu-memory-utilization 0.95 \
  --max-model-len 65536
```
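For reference, a quick reachability check against the container (an illustrative sketch; vLLM exposes the OpenAI-compatible `/v1/models` route, and with `--api-key` set it should expect the same bearer token):

```python
import requests

# List the models served by the local vLLM container.
r = requests.get(
    "http://192.168.5.173:8000/v1/models",
    headers={"Authorization": "Bearer sk-abc123"},
)
r.raise_for_status()
print(r.json())  # should include deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
```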
Here's the error I'm getting:
```text
(.venv) brian@a2:~/git/agentiq$ aiq run --config_file=config.yml --input "What is the capital of France?"
...
Workflow Result:
["Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-abc123. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}"]
```