Bug in Installing Open WebUI

I followed the instructions for installing Open WebUI and was able to create a Docker container and process for Open WebUI, which I could see with ps -ef | grep "open", but when I attempted to open the application in my Chrome browser on the PC I got "This site can't be reached".

I was, however, able to browse localhost:11000. Is this something with my firewall on Norton, or something else I can correct? Many thanks – Ira Laefsky

Which DGX system? What instructions?

Need more information @ira2laefsky :-)

ScottE

I'm running on a new NVIDIA DGX Spark. The instructions are those for the Spark on running Open WebUI. When I run netstat -ano I get:

laefsky@spark-295e:~$ netstat -ano | less
laefsky@spark-295e:~$ netstat -ano | grep "8080"
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN off (0.00/0/0)
tcp6 0 0 :::8080 :::* LISTEN off (0.00/0/0)
laefsky@spark-295e:~$
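
I haven't yet tried checking the HTTP response directly from the Spark itself; I believe something like this (assuming curl is installed there) would show whether the server actually answers locally, as opposed to merely listening:

curl -sI http://localhost:8080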

https://build.nvidia.com/spark/open-webui – these are the instructions I've been following for installing Open WebUI on my DGX Spark.

This is the last thing I copied to the terminal:

Step 3: Start the Open WebUI container

Start the Open WebUI container by running:

docker run -d -p 8080:8080 --gpus=all \
  -v open-webui:/app/backend/data \
  -v open-webui-ollama:/root/.ollama \
  --name open-webui ghcr.io/open-webui/open-webui:ollama

This will start the Open WebUI container and make it accessible at http://localhost:8080. You can access the Open WebUI interface from your local web browser.

Application data will be stored in the open-webui volume and model data will be stored in the open-webui-ollama volume.
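
For what it's worth, I can check the container from the Spark afterwards with something like:

docker ps --filter name=open-webui --format '{{.Names}}  {{.Ports}}'

(the --format flag is only there to trim the output down to the name and port mapping).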

Ah, got it @ira2laefsky . Let me move this over to the DGX Spark forum so it gets more eyes to help out.

ScottE

Could you please provide me with a link to my query on the dgx forum and also a url for the forum? – Ira

Hi,

Are you trying to access the web UI locally on the Spark? You can reach the DGX Dashboard remotely because the NVIDIA Sync app tunnels it over SSH, but this does not apply to other locally hosted ports; you will need to expose those yourself.
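
For example, if you just want to reach the published port over plain SSH from your PC (user and hostname taken from your earlier output; adjust to your setup), something like this should work for as long as the session stays open:

ssh -L 8080:localhost:8080 laefsky@spark-295e

Then browse http://localhost:8080 on the PC.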

I am trying to install Open WebUI on my headless DGX Spark and access it from my Windows PC via NVIDIA Sync. I have been unable to install and access Open WebUI on port 8080 from my PC using a cmd terminal shell SSH'ed from my PC to the Spark. – Ira

You will need to follow the "Setup Open WebUI on Remote Spark with NVIDIA Sync" instructions in the playbook. The instructions are different from setting it up for local access.

If you are running headless, there is a separate set of instructions on the Open WebUI with Ollama page: Open WebUI with Ollama / Setup Open WebUI on Remote Spark with NVIDIA Sync.

If you're connected to your Spark through NVIDIA Sync, it maintains its own active user session. After running a usermod -aG command followed by newgrp, you'll need to disconnect your Sync session before continuing with the tutorial commands. The NVIDIA Sync example updates your bash session with newgrp but not the Sync session, so the Sync session can't run the docker command until you start a new session.
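
For reference, the group commands in question are roughly these (the docker group is the usual one for this; check the playbook's exact wording):

sudo usermod -aG docker $USER   # add your user to the docker group
newgrp docker                   # refresh group membership in this shell only

newgrp only affects the shell it runs in, which is why the separate Sync session still needs a fresh login before docker commands will work there.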

I’ve been following the headless configuration as follows:
Step 4: Add Open WebUI custom port configuration

A Custom port is used to automatically start the Open WebUI container and set up port forwarding.

Click the “Add New” button on the Custom tab.

Fill out the form with these values:

  • Name: Open WebUI
  • Port: 12000
  • Auto open in browser at the following path: Check this checkbox
  • Start Script: Copy and paste this entire script:
#!/usr/bin/env bash
set -euo pipefail

NAME="open-webui"
IMAGE="[ghcr.io/open-webui/open-webui:ollama](http://ghcr.io/open-webui/open-webui:ollama)"

cleanup() {
  echo "Signal received; stopping ${NAME}..."
  docker stop "${NAME}" >/dev/null 2>&1 || true
  exit 0
}
trap cleanup INT TERM HUP QUIT EXIT

# Ensure Docker CLI and daemon are available
if ! docker info >/dev/null 2>&1; then
  echo "Error: Docker daemon not reachable." >&2
  exit 1
fi

# Already running?
if [ -n "$(docker ps -q --filter "name=^${NAME}$" --filter "status=running")" ]; then
  echo "Container ${NAME} is already running."
else
  # Exists but stopped? Start it.
  if [ -n "$(docker ps -aq --filter "name=^${NAME}$")" ]; then
    echo "Starting existing container ${NAME}..."
    docker start "${NAME}" >/dev/null
  else
    # Not present: create and start it.
    echo "Creating and starting ${NAME}..."
    docker run -d -p 12000:8080 --gpus=all \
      -v open-webui:/app/backend/data \
      -v open-webui-ollama:/root/.ollama \
      --name "${NAME}" "${IMAGE}" >/dev/null
  fi
fi

echo "Running. Press Ctrl+C to stop ${NAME}."
# Keep the script alive until a signal arrives
while :; do sleep 86400; done

  • Click the “Add” button to save configuration to your DGX Spark.
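
(Side note: once Sync has run this script, I believe the mapping can be confirmed on the Spark with docker port open-webui, which should print something like 8080/tcp -> 0.0.0.0:12000.)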

Step 5: Launch Open WebUI

Click on the NVIDIA Sync icon in your system tray or taskbar to open the main application window.

Under the “Custom” section, click on “Open WebUI”.

Your default web browser should automatically open to the Open WebUI interface at http://localhost:12000.

TIP

On first run, Open WebUI downloads models. This can delay server start and cause the page to fail to load in your browser. Simply wait and refresh the page. On future runs, startup should be faster since the models are already downloaded.

This is the response I got on trying step 5

This site can’t be reached

The connection was reset.

Try:

  • Checking the connection
  • Checking the proxy and the firewall
  • Running Windows Network Diagnostics

Error code: ERR_CONNECTION_RESET

I've been following these instructions to install it.

Did you validate that it's running in Docker?

The script you posted has Markdown link text for the IMAGE value instead of just the plain image reference →

IMAGE="ghcr.io/open-webui/open-webui:ollama"

Change this first and see if it fixes your issue, otherwise try these:

docker ps

If it's running, you should check to make sure it shows:

PORTS
0.0.0.0:12000->8080/tcp

If it's not running, see if it's stopped:

docker ps -a

If it’s stopped you can check logs:

docker logs open-webui

But if all else fails, I would recommend running this file via SSH and seeing what happens.
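
For example, to watch the container's logs live while you retry the page (assuming the container name open-webui from the script):

docker logs -f --tail 50 open-webui

Ctrl+C stops following the logs without stopping the container.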

I did a docker ps and got:

laefsky@spark-295e:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d61bcf92c54d ghcr.io/open-webui/open-webui:ollama "bash start.sh" 30 hours ago Up 13 hours (healthy) 0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp open-webui

I also ran docker logs open-webui and got:
laefsky@spark-295e:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d61bcf92c54d ghcr.io/open-webui/open-webui:ollama "bash start.sh" 30 hours ago Up 13 hours (healthy) 0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp open-webui
laefsky@spark-295e:~$ docker logs open-webui
Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
Generating WEBUI_SECRET_KEY
Loading WEBUI_SECRET_KEY from .webui_secret_key
USE_OLLAMA is set to true, starting ollama serve.
time=2025-10-21T08:37:55.760Z level=INFO source=routes.go:1481 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-21T08:37:55.760Z level=INFO source=images.go:522 msg="total blobs: 0"
time=2025-10-21T08:37:55.760Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-21T08:37:55.761Z level=INFO source=routes.go:1534 msg="Listening on 127.0.0.1:11434 (version 0.12.5)"
time=2025-10-21T08:37:55.761Z level=INFO source=runner.go:80 msg="discovering available GPUs…"
time=2025-10-21T08:37:56.748Z level=INFO source=types.go:112 msg="inference compute" id=GPU-f2e96ddd-fbe9-c87b-ebda-d20e0e68266d library=CUDA compute=12.1 name=CUDA0 description="NVIDIA GB10" libdirs=ollama,cuda_v13 driver=13.0 pci_id=01:00.f type=iGPU total="119.7 GiB" available="115.9 GiB"
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
WARNI [open_webui.env]

WARNING: CORS_ALLOW_ORIGIN IS SET TO '*' - NOT RECOMMENDED FOR PRODUCTION DEPLOYMENTS.

INFO [open_webui.env] VECTOR_DB: chroma
INFO [open_webui.env] Embedding model set: sentence-transformers/all-MiniLM-L6-v2
WARNI [langchain_community.utils.user_agent] USER_AGENT environment variable not set, consider setting it to identify your requests.

██████╗ ██████╗ ███████╗███╗ ██╗ ██╗ ██╗███████╗██████╗ ██╗ ██╗██╗
██╔═══██╗██╔══██╗██╔════╝████╗ ██║ ██║ ██║██╔════╝██╔══██╗██║ ██║██║
██║ ██║██████╔╝█████╗ ██╔██╗ ██║ ██║ █╗ ██║█████╗ ██████╔╝██║ ██║██║
██║ ██║██╔═══╝ ██╔══╝ ██║╚██╗██║ ██║███╗██║██╔══╝ ██╔══██╗██║ ██║██║
╚██████╔╝██║ ███████╗██║ ╚████║ ╚███╔███╔╝███████╗██████╔╝╚██████╔╝██║
╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝ ╚══╝╚══╝ ╚══════╝╚═════╝ ╚═════╝ ╚═╝

v0.6.34 - building the best AI user interface.

https://github.com/open-webui/open-webui

Fetching 30 files: 100%|██████████| 30/30 [00:00<00:00, 43389.35it/s]
INFO: Started server process [1]
INFO: Waiting for application startup.
2025-10-21 08:38:02.782 | INFO | open_webui.utils.logger:start_logger:162 - GLOBAL_LOG_LEVEL: INFO
2025-10-21 08:38:02.782 | INFO | open_webui.main:lifespan:561 - Installing external dependencies of functions and tools…
2025-10-21 08:38:02.787 | INFO | open_webui.utils.plugin:install_frontmatter_requirements:283 - No requirements found in frontmatter.
Loading WEBUI_SECRET_KEY from file, not provided as an environment variable.
Loading WEBUI_SECRET_KEY from .webui_secret_key
USE_OLLAMA is set to true, starting ollama serve.
time=2025-10-22T01:15:20.922Z level=INFO source=routes.go:1481 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_REMOTES:[ollama.com] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-10-22T01:15:20.922Z level=INFO source=images.go:522 msg="total blobs: 0"
time=2025-10-22T01:15:20.922Z level=INFO source=images.go:529 msg="total unused blobs removed: 0"
time=2025-10-22T01:15:20.922Z level=INFO source=routes.go:1534 msg="Listening on 127.0.0.1:11434 (version 0.12.5)"
time=2025-10-22T01:15:20.923Z level=INFO source=runner.go:80 msg="discovering available GPUs…"
time=2025-10-22T01:15:21.990Z level=INFO source=types.go:112 msg="inference compute" id=GPU-f2e96ddd-fbe9-c87b-ebda-d20e0e68266d library=CUDA compute=12.1 name=CUDA0 description="NVIDIA GB10" libdirs=ollama,cuda_v13 driver=13.0 pci_id=01:00.f type=iGPU total="119.7 GiB" available="115.6 GiB"
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
WARNI [open_webui.env]

WARNING: CORS_ALLOW_ORIGIN IS SET TO '*' - NOT RECOMMENDED FOR PRODUCTION DEPLOYMENTS.

INFO [open_webui.env] VECTOR_DB: chroma
INFO [open_webui.env] Embedding model set: sentence-transformers/all-MiniLM-L6-v2
WARNI [langchain_community.utils.user_agent] USER_AGENT environment variable not set, consider setting it to identify your requests.

██████╗ ██████╗ ███████╗███╗ ██╗ ██╗ ██╗███████╗██████╗ ██╗ ██╗██╗
██╔═══██╗██╔══██╗██╔════╝████╗ ██║ ██║ ██║██╔════╝██╔══██╗██║ ██║██║
██║ ██║██████╔╝█████╗ ██╔██╗ ██║ ██║ █╗ ██║█████╗ ██████╔╝██║ ██║██║
██║ ██║██╔═══╝ ██╔══╝ ██║╚██╗██║ ██║███╗██║██╔══╝ ██╔══██╗██║ ██║██║
╚██████╔╝██║ ███████╗██║ ╚████║ ╚███╔███╔╝███████╗██████╔╝╚██████╔╝██║
╚═════╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝ ╚══╝╚══╝ ╚══════╝╚═════╝ ╚═════╝ ╚═╝

v0.6.34 - building the best AI user interface.

https://github.com/open-webui/open-webui

Fetching 30 files: 100%|██████████| 30/30 [00:00<00:00, 83886.08it/s]
INFO: Started server process [1]
INFO: Waiting for application startup.
2025-10-22 01:15:24.690 | INFO | open_webui.utils.logger:start_logger:162 - GLOBAL_LOG_LEVEL: INFO
2025-10-22 01:15:24.690 | INFO | open_webui.main:lifespan:561 - Installing external dependencies of functions and tools…
2025-10-22 01:15:24.694 | INFO | open_webui.utils.plugin:install_frontmatter_requirements:283 - No requirements found in frontmatter.

According to your docker ps output, you are not forwarding the correct port. Please make sure you opened port 12000, not 8080, through NVIDIA Sync.
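
A minimal way to fix that, assuming the only problem is the 8080:8080 mapping: remove the old container (your data lives in the named volumes, so it survives) and recreate it with the 12000:8080 mapping from the playbook script, e.g.

docker stop open-webui
docker rm open-webui
docker run -d -p 12000:8080 --gpus=all \
  -v open-webui:/app/backend/data \
  -v open-webui-ollama:/root/.ollama \
  --name open-webui ghcr.io/open-webui/open-webui:ollama

Alternatively, just remove the old container and let the Sync start script recreate it for you.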

Sorry for being a newbie to this and a little bit thick. What explicitly should I do now? When I opened localhost:12000 in my browser I got:

This site can't be reached

localhost refused to connect.

Try:

  • Checking the connection
  • Checking the proxy and the firewall

ERR_CONNECTION_REFUSED

So you will need to redo steps 3 and 4 here, which means configuring NV Sync to access that port.

When you add the custom tool, make sure it looks like this

Afterwards, you should be able to see the tool in the Sync panel; clicking it should open the Open WebUI tool.

This is how I edited the Custom Application Open WebUI based on my (mis)understanding of your advice:

#!/usr/bin/env bash
set -euo pipefail

NAME="open-webui"
IMAGE="ghcr.io/open-webui/open-webui:ollama

cleanup() {
  echo "Signal received; stopping ${NAME}..."
  docker stop "${NAME}" >/dev/null 2>&1 || true
  exit 0
}
trap cleanup INT TERM HUP QUIT EXIT

# Ensure Docker CLI and daemon are available
if ! docker info >/dev/null 2>&1; then
  echo "Error: Docker daemon not reachable." >&2
  exit 1
fi

# Already running?
if [ -n "$(docker ps -q --filter "name=^${NAME}$" --filter "status=running")" ]; then
  echo "Container ${NAME} is already running."
else
  # Exists but stopped? Start it.
  if [ -n "$(docker ps -aq --filter "name=^${NAME}$")" ]; then
    echo "Starting existing container ${NAME}..."
    docker start "${NAME}" >/dev/null
  else
    # Not present: create and start it.
    echo "Creating and starting ${NAME}..."
    docker run -d -p 12000:8080 --gpus=all \
    docker run -d -p 12000:12000 --gpus -all \
      -v open-webui:/app/backend/data \
      -v open-webui-ollama:/root/.ollama \
      --name "${NAME}" "${IMAGE}" >/dev/null
  fi
fi

echo "Running. Press Ctrl+C to stop ${NAME}."
# Keep the script alive until a signal arrives
while :; do sleep 86400; done

How should I correct this script, in particular these lines:

docker run -d -p 12000:8080 --gpus=all \

docker run -d -p 12000:12000 --gpus -all \

When it currently launches localhost:12000 in my Chrome browser, I get the following reply:

This site can’t be reached

The connection was reset.

Try:

  • Checking the connection
  • Checking the proxy and the firewall
  • Running Windows Network Diagnostics

Error code: ERR_CONNECTION_RESET

If possible, could you give me a complete corrected script for the custom application "Open WebUI" so I can cut and paste it exactly? Thank you – Ira Laefsky

The script should be the same as the one in the playbook; you do not need to modify it. Make sure you have no other open-webui containers running that you may have started via the command line.

docker stop <container_id>

You need to make sure your docker ps output looks like:
8d6139b8ad7f ghcr.io/open-webui/open-webui:ollama "bash start.sh" 5 hours ago Up 34 seconds (healthy) 0.0.0.0:12000->8080/tcp, [::]:12000->8080/tcp open-webui

where host port 12000 is being mapped to container port 8080; then the setup is correct.
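
If docker ps still shows 0.0.0.0:8080->8080/tcp instead, stop and remove that container first (the named volumes keep your data), e.g.:

docker stop open-webui && docker rm open-webui

Then relaunch Open WebUI from the NVIDIA Sync panel so the start script can recreate the container with the 12000->8080 mapping.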