Julia CUDA on DGX Spark

I’m not sure if this is the right forum for this, but after running through the PyTorch tutorial I wanted to compare it with Julia.

@sparky:~$ sudo snap install julia-dev --classic
@sparky:~$ julia
julia> import Pkg; Pkg.add("CUDA")
julia> using CUDA
julia> CUDA.versioninfo()
CUDA toolchain: 
- runtime 13.0, artifact installation
- driver 580.95.5 for 13.0
- compiler 13.0

CUDA libraries: 
- CUBLAS: 13.1.0
- CURAND: 10.4.0
- CUFFT: 12.0.0
- CUSOLVER: 12.0.4
- CUSPARSE: 12.6.3
- CUPTI: 2025.3.1 (API 13.0.1)
- NVML: 13.0.0+580.95.5

Julia packages: 
- CUDA: 5.9.2
- CUDA_Driver_jll: 13.0.2+0
- CUDA_Compiler_jll: 0.3.0+0
- CUDA_Runtime_jll: 0.19.2+0

Toolchain:
- Julia: 1.12.0
- LLVM: 18.1.7

1 device:
  0: NVIDIA GB10 (sm_121, 109.722 GiB / 119.699 GiB available)

julia>
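As a quick sanity check before running the full test suite (this snippet is my own, not from the original post), a couple of array operations confirm the toolchain works end to end:

```julia
using CUDA

# Allocate two random vectors on the GPU and add them; the broadcast
# runs as a fused GPU kernel, so this exercises allocation, kernel
# compilation for the device, and host/device transfer in one go.
a = CUDA.rand(Float32, 1024)
b = CUDA.rand(Float32, 1024)
c = a .+ b

# Copy back to the host and verify against a CPU computation.
@assert Array(c) ≈ Array(a) .+ Array(b)
```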

Then, at the julia> prompt, you can press the ] key to get a package manager prompt:

(@v1.12) pkg> test CUDA

Most of the tests pass, but there are a few intermittent failures.

I was following along here: Introduction · CUDA.jl
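For context, the kernel examples on that page look roughly like the sketch below (written from memory of the tutorial, so treat the details as approximate): a hand-written kernel launched with the `@cuda` macro, using a grid-stride loop so any number of elements works with a fixed launch configuration.

```julia
using CUDA

# Grid-stride kernel: each thread adds its elements of x into y.
function gpu_add!(y, x)
    index  = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    stride = gridDim().x * blockDim().x
    for i in index:stride:length(y)
        @inbounds y[i] += x[i]
    end
    return nothing
end

N = 2^20
x = CUDA.fill(1.0f0, N)   # a vector of 1.0f0 on the GPU
y = CUDA.fill(2.0f0, N)   # a vector of 2.0f0 on the GPU

numblocks = cld(N, 256)
@cuda threads=256 blocks=numblocks gpu_add!(y, x)

@assert all(Array(y) .== 3.0f0)
```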

The errors are things like

CUDA error: limit is not supported on this architecture (code 215, ERROR_UNSUPPORTED_LIMIT)

and

 Unsupported Function 'cudaDeviceSynchronize' on arch 'sm_90' or higher

So far the rest of the examples on that page are working, so maybe the errors can be ignored, but I was curious whether there are any DGX-specific instructions for Julia.

Hi,
I’m not very familiar with Julia, but the GB10 chip is sm_121, so any code that relies on functions dropped on newer architectures, such as cudaDeviceSynchronize, will not work.
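If it helps, the compute capability can be confirmed from within Julia (a small check of my own, not from the thread); on a DGX Spark this should report the sm_121 shown in the versioninfo output above:

```julia
using CUDA

# Query the current device's name and compute capability.
# capability() returns a VersionNumber, e.g. v"12.1" for sm_121.
dev = CUDA.device()
println(CUDA.name(dev), " has compute capability ", CUDA.capability(dev))
```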
