AI Builder Docker Linux

This document provides instructions for setting up a Python virtual environment and using Docker to install and run the Ollama LLM server and Open WebUI. It outlines the steps for creating the virtual environment, installing the necessary packages, and configuring Docker services for both Ollama and Open WebUI, and it includes commands for pulling specific models from the Ollama library and accessing them within the Docker container.

AI build

Use a Virtual Environment


Creating a virtual environment is the most common and recommended
approach. This keeps your installed packages isolated from the system
Python. For example:
1. Create the virtual environment:
python3 -m venv ~/open-webui-venv

2. Activate the virtual environment:
source ~/open-webui-venv/bin/activate

3. Install open-webui within the virtual environment:
pip install open-webui
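
Once installed, you should be able to launch the UI directly from the activated virtual environment (a quick check, assuming the open-webui CLI and its default port of 8080 in current releases):

open-webui serve

Then browse to http://localhost:8080.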

Docker
sudo apt install -y glances vim docker.io
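
On some systems the Docker daemon is not started or enabled automatically after installation. The usual follow-up, assuming a systemd-based distribution such as Ubuntu:

sudo systemctl enable --now docker
sudo usermod -aG docker $USER # optional: run docker without sudo; log out and back in to take effect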

Docker compose
version: "3.8"

services:
ollama:
# Replace this with the correct image name for your Ollama LLM
service.
image: ollama/ollama:latest
container_name: ollama
ports:
- "8000:8000" # Exposing the LLM API port (adjust if needed)
environment:
# Add any required environment variables for configuring your
model(s)
- MODEL_CONFIG=your_model_configuration_here
# Optional: If you need persistent storage for models or
configurations,
# add a volumes section:
# volumes:
# - ollama_data:/path/inside/container

openwebui:
# Replace this with the correct image name for your open-webui.
image: open-webui/open-webui:latest
container_name: open-webui
ports:
- "11434:11434" # Port where you access the web UI
depends_on:
- ollama
environment:
# Point the UI to the LLM API endpoint provided by the Ollama
container.
- LLM_API_URL=http://ollama:8000
# Optional: If you have configuration files or data to persist:
# volumes:
# - openwebui_config:/path/inside/container

# Optional: Define volumes if needed


# volumes:
# ollama_data:
# openwebui_config:
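
To bring both services up in the background, run one of the following from the directory containing the compose file (which form you need depends on whether you have the compose plugin or the older standalone binary):

sudo docker compose up -d
sudo docker-compose up -d # legacy standalone tool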

Installing Ollama via Docker

docker run -d \
  --name ollama \
  -p 11434:11434 \
  -v ollama_volume:/root/.ollama \
  ollama/ollama:latest
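
If you run Ollama this way rather than via compose, Open WebUI can be started the same way. A sketch based on the command published in the Open WebUI README (the --add-host flag lets the container reach services on the Docker host):

docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --add-host=host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:main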

Open a shell inside the ollama container to download models:

docker exec -it ollama bash
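
Alternatively, you can pull a model without opening an interactive shell by passing the command straight to docker exec:

docker exec -it ollama ollama pull llama3.2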

Models
Browse the available models at https://ollama.com/search

ollama pull deepseek-r1:8b

- deepseek-r1:8b works fine.

ollama pull deepseek-r1:14b

ollama pull llama3.2

ollama pull llama2:13b
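
After a pull completes, you can verify the download and try a model directly in the terminal (standard ollama subcommands):

ollama list # show models downloaded so far
ollama run deepseek-r1:8b # start an interactive chat with the model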
