Jetson Thor: Containers do not see the GPU

I am following the instructions on these pages: https://docs.nvidia.com/jetson/agx-thor-devkit/user-guide/latest/setup_cuda.html and https://docs.nvidia.com/jetson/agx-thor-devkit/user-guide/latest/setup_docker.html#example-1-run-pytorch-container

Not able to detect the GPU on Thor. Output below.

Hi,

Which container do you use?
Our latest PyTorch container for Thor is nvcr.io/nvidia/pytorch:25.10-py3.
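If it helps for a quick retest, here is a minimal sketch of pulling that newer image and checking GPU visibility, assuming the Docker setup from the doc where nvidia is the default runtime (otherwise add --runtime nvidia to the run command):

$ docker pull nvcr.io/nvidia/pytorch:25.10-py3
$ docker run --rm \
    nvcr.io/nvidia/pytorch:25.10-py3 \
    python3 -c "import torch; print('CUDA available:', torch.cuda.is_available())"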

Thanks.

I am using container nvcr.io/nvidia/pytorch:25.08-py3.
The issue is that it does not detect the GPU. I have tried the same on two Thor setups purchased from Arrow. Both show the same issue.

I am getting a similar issue when I run the CUDA container. The GPU is not detected.

This is part of the install instructions posted at https://docs.nvidia.com/jetson/agx-thor-devkit/user-guide/latest/setup_docker.html#example-1-run-pytorch-container

Hi,

The error indicates an insufficient CUDA driver, so this might be related to the setup.

How do you set up your system? Do you use JetPack 7.0?
Could you run nvidia-smi outside of the container and share the output with us?
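For context, the two values worth comparing in that output are the driver/CUDA version that nvidia-smi reports on the host and the CUDA version the container's PyTorch build expects. A minimal sketch of collecting both, using the 25.08 container tag mentioned above (the second command does not need GPU access, it only reads the build metadata):

$ nvidia-smi
$ docker run --rm nvcr.io/nvidia/pytorch:25.08-py3 \
    python3 -c "import torch; print('torch built against CUDA', torch.version.cuda)"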

Thanks.

Yes, I used JetPack 7.0.
nvidia-smi and nvidia-smi -q output are attached.

output.txt (10.8 KB)

Hi,

We tested the command/container mentioned in the doc and it works as expected in our environment.

$ docker run --rm -it \
    -v "$PWD":/workspace \
    -w /workspace \
    nvcr.io/nvidia/pytorch:25.08-py3
# python3 <<'EOF'
import torch
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU name:", torch.cuda.get_device_name(0))
    x = torch.rand(10000, 10000, device="cuda")
    print("Tensor sum:", x.sum().item())
EOF
PyTorch version: 2.8.0a0+34c6371d24.nv25.08
CUDA available: True
GPU name: NVIDIA Thor
Tensor sum: 49997884.0
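
One setup detail that often explains a container not seeing the GPU on Jetson is Docker's default runtime not being set to nvidia. A minimal sketch of checking it, assuming the standard /etc/docker/daemon.json location:

$ docker info | grep -i "default runtime"

If the default runtime is not nvidia, /etc/docker/daemon.json should contain something like:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

followed by restarting Docker with sudo systemctl restart docker.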

Could you try it again?
Or could you try to set up the environment again with the steps below?

Thanks.