Torch not compiled with CUDA enabled

Hi everybody,

I have tried to follow multiple guides, but none of them has worked for me. It is the same issue as described here, but the solution does not work for me.

This is likely because I have different hardware and/or newer software is available at the time of posting. However, I could not figure out what other packages to install, or what else to change, to make this work.

jetson_release

Software part of jetson-stats 4.3.2 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Jetson AGX Orin Developer Kit - Jetpack 6.2 [L4T 36.4.3]
NV Power Mode[0]: MAXN
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3701-0005
 - Module: NVIDIA Jetson AGX Orin (64GB ram)
Platform:
 - Distribution: Ubuntu 22.04 Jammy Jellyfish
 - Release: 5.15.148-tegra
jtop:
 - Version: 4.3.2
 - Service: Active
Libraries:
 - CUDA: 12.8.93
 - cuDNN: 9.3.0.75
 - TensorRT: 10.3.0.30
 - VPI: 3.2.4
 - Vulkan: 1.3.204
 - OpenCV: 4.8.0 - with CUDA: NO

Without any venv or miniconda environment:

  python
  >>> import torch
  >>> print(torch.cuda.is_available())
  False
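To narrow down the failure mode, a slightly more detailed check can distinguish a missing torch from a CPU-only wheel. A minimal sketch (the `cuda_status` helper is my own invention, not part of any library):

```python
def cuda_status():
    """Report whether PyTorch is installed and whether it sees CUDA."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return (f"CUDA OK: torch {torch.__version__}, "
                f"device {torch.cuda.get_device_name(0)}")
    # torch.version.cuda is None on CPU-only builds, which is the
    # usual cause of "Torch not compiled with CUDA enabled".
    return (f"no CUDA: torch {torch.__version__}, "
            f"built against CUDA {torch.version.cuda}")

print(cuda_status())
```

If the last line reports `built against CUDA None`, the installed wheel is a CPU-only build rather than a driver or JetPack problem.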

I have just installed Jetpack 6.2 via the SDK Manager, and then also installed all packages available through it. There was no option to specify how things are compiled (for example, I could not choose to build OpenCV 4.8.0 with CUDA).

I have tried to manually install PyTorch from a wheel, but I must have gotten an incorrect version (or something else was wrong).

I have tried following:

  • this (does not work; it fails after I specify export CUDA_VERSION=12.8 or 12.8.93)
  • this, which only lists wheels for up to CUDA 12.4

I can run jetson-containers run $(autotag pytorch) and then test torch.cuda.is_available(); while it takes a bit of time after import torch, it eventually returns True.

Is there perhaps some way to tell my system to always use that PyTorch (from inside that jetson-containers image) whenever it is needed? I don’t mean building a particular (jetson-)container that includes it, but having my main OS use this containerized PyTorch instead of a locally installed version…
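As far as I know there is no transparent way to point the host's Python at packages inside a container, but a small wrapper script gets close for interactive use. A sketch, assuming jetson-containers and autotag are on your PATH (the script name torch-python is made up for illustration):

```shell
# Create a hypothetical wrapper that forwards python3 invocations
# into the jetson-containers pytorch image.
mkdir -p ~/bin
cat > ~/bin/torch-python <<'EOF'
#!/usr/bin/env bash
# Run python3 inside the pytorch container resolved by autotag.
exec jetson-containers run $(autotag pytorch) python3 "$@"
EOF
chmod +x ~/bin/torch-python
```

Caveats: every invocation pays the container startup cost, and host files are only visible through whatever directories jetson-containers mounts, so this does not help applications like webui that expect torch importable in the host environment.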

And if that is not possible, what else would you recommend? I have some issues with the stable-diffusion-webui jetson-container, but I cannot run it outside Docker without torch being compiled with CUDA support. I would like to try stable-diffusion-webui-forge, which isn’t even available via jetson-containers, but it won’t run with my current torch setup either.

Anybody else have this issue? And perhaps even a solution I could try? Thank you in advance :)

Hello @bnstr

I think you can try the wheels from the jp6/cu126 index. I found that link in this post: Request pytorch for jetpack5.1.3 - #2 by 21271139.
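For reference, those JetPack 6 / CUDA wheels are typically installed by pointing pip at that index. A sketch, assuming the index lives on the Jetson AI Lab pip server (verify the exact URL against the linked post):

```shell
# Install a CUDA-enabled torch build from the jp6/cu126 wheel index
# (URL assumed; confirm it matches the index from the linked post).
pip3 install torch torchvision \
    --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
```

Afterwards, python3 -c "import torch; print(torch.cuda.is_available())" should print True on the host.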

I hope this helps!
Regards!
Eduardo Salazar
Embedded SW Engineer at RidgeRun

Contact us: [email protected]
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com


Thank you so much, Eduardo! This worked perfectly.
