A fair number of tools we use in cudaPackages depend on specific versions of CUDA. Sometimes this is a strict dependency due to API changes or other breakages; other times, it's merely because it is not known whether the code is forward (or backward!) compatible.
The approach that I (@ConnorBaker) have taken so far in cudaPackages has been that when a package is unavailable for some release of CUDA, or is known to be incompatible, it is not present in that CUDA release's package set (e.g., cudaPackages_12_2).
However, this limits the flexibility of downstream users and sidesteps a larger pattern: attempting to use newer or older versions of dependencies.
@SomeoneSerge raised this issue in a comment on a PR about cuda-samples, which we use as a sort of sanity-check, allowing us to make sure that the cudatoolkit builds: #266115 (comment)
> To clarify, it's not "break" in the sense that the derivation has `broken = true` -- I'm replacing `builtins.throw`, which would "break" evaluation entirely.

> Right, we definitely do not want to `throw`. But `broken = true` is still an alternative to `optionalAttrs`; what do you think about those?
For context: `cudaPackages.cuda-samples` has a release for each recent release of CUDA except 11.7. I made the decision to exclude `cuda-samples` from `cudaPackages_11_7` by wrapping it in `optionalAttrs`.
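For readers unfamiliar with the pattern, the exclusion looks roughly like this (a minimal sketch; the predicate and attribute paths are illustrative, not the exact nixpkgs code):

```nix
# Sketch: conditionally expose a package in a CUDA package set.
# The version check and callPackage path below are placeholders.
{ lib, cudaVersion, callPackage }:

lib.optionalAttrs (cudaVersion != "11.7") {
  # cuda-samples is only present when a matching release exists;
  # for 11.7 the attribute is simply absent from the set.
  cuda-samples = callPackage ../development/cuda-modules/cuda-samples { };
}
```

The consequence is that `cudaPackages_11_7.cuda-samples` fails with a "missing attribute" error rather than an explicit "broken" or "unsupported" message, which is part of what this issue is about.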
This issue proposes two mechanisms, in the abstract:

- A way to build a variable version of a package against a fixed version of a package set
  - e.g., use an older `cuda-samples` release (11.6) with `cudaPackages_11_7`
  - e.g., use a newer `cuda-samples` release (11.8) with `cudaPackages_11_7`
- A way to build a fixed version of a package against a variable version of a package set
  - e.g., use the same version of TensorRT with different versions of `cudaPackages`, even if compatibility is not guaranteed by NVIDIA's support matrix
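As an illustration of the first mechanism, a downstream user could already approximate it today with `overrideAttrs`, pinning a different `cuda-samples` release against a fixed package set. This is a hedged sketch: the version and hash are placeholders, and it assumes `cuda-samples` is present in the set being overridden:

```nix
# Hypothetical override: build an older cuda-samples release (11.6)
# against the cudaPackages_11_7 package set.
{ cudaPackages_11_7, fetchFromGitHub }:

cudaPackages_11_7.cuda-samples.overrideAttrs (oldAttrs: rec {
  version = "11.6";  # placeholder version
  src = fetchFromGitHub {
    owner = "NVIDIA";
    repo = "cuda-samples";
    rev = "v${version}";
    hash = "";  # placeholder; fill in the real hash
  };
})
```

The proposed mechanisms would make this kind of mixing a supported, first-class operation rather than an ad-hoc override that only works when the attribute happens to exist.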
Taken together, these mechanisms would allow users to attempt to build things which would otherwise not be possible (e.g., using a newer release of CUDNN which might support a specific version of CUDA, even if it is not listed in the support matrix).
It's worth keeping in mind that support matrices (specifically as we're talking about them here, with respect to NVIDIA's software) essentially guarantee that the listed combinations work. They do not guarantee that combinations outside those enumerated will be broken, only that they are unsupported.
@SomeoneSerge @samuela thoughts?