Thor GPU cannot be detected

Thor with JetPack 7.0, three questions about the GPU:

  1. Running Jetson Power GUI, there isn't a "Load percent" reading for the GPU. (Maybe jtop doesn't support Thor yet, but what about Jetson Power GUI?)
  2. Running "ollama run deepseek-r1:8b" (from Ollama - NVIDIA Jetson AI Lab), 14 CPU cores run at 100%, but I can't see any GPU usage.
  3. $ lsmod | grep nvgpu outputs nothing. Is there no driver for the GPU?


To enable Thor GPU support inside Docker, please run the container with the following command:

docker run --rm \
-it \
--name ollama \
--runtime nvidia \
-p 11434:11434 \
-v ${HOME}/ollama-data:/data \
ghcr.io/nvidia-ai-iot/ollama:r38.2.arm64-sbsa-cu130-24.04

After that, you can download and run the model with:

ollama run --verbose gpt-oss:20b

Note that the Jetson AI Lab tutorial does not include the --runtime nvidia option in the example command. In our tests, adding this parameter was necessary for the Thor GPU to be recognized correctly.
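
If you want to confirm GPU visibility before pulling a model, a minimal check is to run nvidia-smi inside the same container. This assumes the NVIDIA runtime injects nvidia-smi into the container, which is its usual behavior with the SBSA driver:

docker run --rm --runtime nvidia ghcr.io/nvidia-ai-iot/ollama:r38.2.arm64-sbsa-cu130-24.04 nvidia-smi

If the GPU is recognized, the Thor device should appear in the output table.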

As shown in the attached screenshot, during execution the GPU utilization reached 97%, confirming that the Thor GPU was fully engaged. The monitoring interface in the image is from Cordatus.

👉 By September 10, you will be able to download and use the Jetson Thor–enabled version of this application.


Thanks for the reply, it works now.

--runtime nvidia

is the key point.

But does the Thor GPU only work inside Docker?

Hi,

Thor uses the SBSA GPU driver.
So please use nvidia-smi to check the GPU usage:

$ nvidia-smi

The GPU loading of Jetson Power GUI is a known issue.
You can find more description in our rel-38.2 release note:
https://docs.nvidia.com/jetson/archives/r38.2/ReleaseNotes/Jetson_Linux_Release_Notes_r38.2.pdf

Issue 5406663: Currently, you can monitor GPU utilization by using nvidia-smi dmon. However, because of design changes, integration of GPU utilization in Jetson Power GUI is still under evaluation.
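
For example, to sample the utilization metric group once per second (flag meanings per nvidia-smi dmon -h: -s u selects utilization, -d sets the sampling interval in seconds):

$ nvidia-smi dmon -s u -d 1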

Thanks.


The reason the GPU is not visible inside Docker is usually that "Step 2: Rest of the Docker Setup" is skipped.

You need to configure the NVIDIA Container Runtime in /etc/docker/daemon.json.
Here is an example configuration that resolves the issue:

$ cat /etc/docker/daemon.json
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "default-runtime": "nvidia"
}

After applying this setup, docker run commands will detect the GPU correctly.
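
Note that after editing /etc/docker/daemon.json, the Docker daemon has to be restarted for the new default runtime to take effect:

$ sudo systemctl restart docker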


Hi,

On JetPack 7.0 GA, Docker and NVIDIA Container Runtime are pre-installed on the system by default.
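
If you want to verify this on your system, one quick check (exact output varies with the Docker version) is:

$ docker info | grep -i runtime

This should list nvidia among the available runtimes.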
Thanks.

Added NVML and JetPack 7 support to jetson_stats and opened a pull request, though the project looks fairly abandoned at this point, so I'm not sure if it will get integrated.
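
In the meantime, since nvidia-smi is itself a front end to NVML, a comparable utilization readout can be taken directly from the command line; the query fields below are standard nvidia-smi fields, not anything specific to jetson_stats:

$ nvidia-smi --query-gpu=utilization.gpu,utilization.memory,memory.used --format=csv -l 1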


Hi,

Thanks a lot for this help!
We will share this information with our internal team.

Even though Docker and NVIDIA Container Runtime are pre-installed on JetPack 7.0 GA, users who flash the Jetson device via USB (instead of using SDK Manager or the flash.sh script) should note that the Docker configuration file (/etc/docker/daemon.json) does not include the "default-runtime": "nvidia" setting by default.

This means that running GPU-enabled containers requires adding --runtime=nvidia to every Docker command — unless the config file is updated manually.

For example, after flashing via USB, the content of /etc/docker/daemon.json looks roughly like this (the "runtimes" entry is present, but "default-runtime" is missing):
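
{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}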

To avoid the need to specify --runtime=nvidia each time, users can manually add the "default-runtime": "nvidia" entry:

{
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "default-runtime": "nvidia"
}

I used SDK Manager to flash Thor, and the following entry was absent from /etc/docker/daemon.json, so I just added it. Thanks.

,
  "default-runtime": "nvidia"
