
Conversation

@github-actions (bot)

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

matthewdouglas previously approved these changes Apr 30, 2025
@Titus-von-Koeller (Collaborator, Author) commented May 5, 2025

Case 1 - missing dependencies (reproduced by deleting some linked libraries and installing a torch build packaged with a different CUDA major version):

❯ python -c 'from bitsandbytes.cextension import lib; lib.cquantize_blockwise_fp16_nf4()'
WARNING: BNB_CUDA_VERSION=124 environment variable detected; loading libbitsandbytes_cuda124.so.
This can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.
If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=
If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH
For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64

bitsandbytes library load error: libcudart.so.12: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 328, in <module>
    lib = get_native_library()
          ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 315, in get_native_library
    dll = ct.cdll.LoadLibrary(str(binary_path))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.condax/mamba/envs/bnb/lib/python3.11/ctypes/__init__.py", line 454, in LoadLibrary
    return self._dlltype(name)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.condax/mamba/envs/bnb/lib/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libcudart.so.12: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 294, in __getattr__
    raise RuntimeError(f"{self.formatted_error}\n\nNative code method attempted to access: {name}")

RuntimeError: 🚨 CUDA SETUP ERROR: Missing dependency: libcudart.so.12 🚨

CUDA 12.x runtime libraries were not found in the LD_LIBRARY_PATH.

To fix this, make sure that:
1. You have installed CUDA 12.x toolkit on your system
2. The CUDA runtime libraries are in your LD_LIBRARY_PATH

You can add them with (and persist the change by adding the line to your .bashrc):
   export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/cuda-12.x/lib64

Original error: libcudart.so.12: cannot open shared object file: No such file or directory

🔍 Run this command for detailed diagnostics:
python -m bitsandbytes

If you've tried everything and still have issues:
1. Include ALL version info (operating system, bitsandbytes, pytorch, cuda, python)
2. Describe what you've tried in detail
3. Open an issue with this information:
   https://github.com/bitsandbytes-foundation/bitsandbytes/issues

Native code method attempted to access: cquantize_blockwise_fp16_nf4
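
A quick way to confirm whether the CUDA 12 runtime named in this error is actually loadable before retrying (a minimal check using only the standard library; assumes Linux and the soname from the message above):

```python
# Minimal check: try to dlopen the runtime the error above names. If this
# raises OSError, libcudart.so.12 is not visible to the dynamic loader
# (e.g. not on LD_LIBRARY_PATH) and the export suggested in the error applies.
import ctypes

try:
    ctypes.CDLL("libcudart.so.12")
    print("libcudart.so.12 is loadable")
except OSError as e:
    print(f"still missing: {e}")
```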

@Titus-von-Koeller (Collaborator, Author)

Case 2 - custom-configured CUDA version (different from the PyTorch CUDA version):

❯ python -c 'from bitsandbytes.cextension import lib; lib.cquantize_blockwise_fp16_nf4()'
WARNING: BNB_CUDA_VERSION=125 environment variable detected; loading libbitsandbytes_cuda125.so.
This can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.
If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=
If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH
For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64

bitsandbytes library load error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda125.so
Traceback (most recent call last):
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 335, in <module>
    lib = get_native_library()
          ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 315, in get_native_library
    raise RuntimeError(f"Configured CUDA binary not found at {cuda_binary_path}")
RuntimeError: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda125.so
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 298, in __getattr__
    raise RuntimeError(f"{self.formatted_error}Native code method attempted to access: lib.{name}()")
RuntimeError: 
🚨 CUDA VERSION MISMATCH 🚨
Requested CUDA version:          12.5
Detected PyTorch CUDA version:   11.8
Available pre-compiled versions: 
  - 12.3
  - 12.4

This means:
The version you're trying to use is NOT distributed with this package

Attempted to use bitsandbytes native library functionality but it's not available.

This typically happens when:
1. bitsandbytes doesn't ship with a pre-compiled binary for your CUDA version
2. The library wasn't compiled properly during installation from source

To make bitsandbytes work, the compiled library version MUST exactly match the linked CUDA version.
If your CUDA version doesn't have a pre-compiled binary, you MUST compile from source.

You have two options:
1. COMPILE FROM SOURCE (required if no binary exists):
   https://huggingface.co/docs/bitsandbytes/main/en/installation#cuda-compile
2. Use BNB_CUDA_VERSION to specify a DIFFERENT CUDA version from the detected one, which is installed on your machine and matching an available pre-compiled version listed above

Original error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda125.so

🔍 Run this command for detailed diagnostics:
python -m bitsandbytes

If you've tried everything and still have issues:
1. Include ALL version info (operating system, bitsandbytes, pytorch, cuda, python)
2. Describe what you've tried in detail
3. Open an issue with this information:
   https://github.com/bitsandbytes-foundation/bitsandbytes/issues

Native code method attempted to access: lib.cquantize_blockwise_fp16_nf4()
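
For context, the override mechanism referenced in the warning maps BNB_CUDA_VERSION onto the binary filename that gets loaded. A rough sketch of that mapping (resolve_binary_name is an illustrative helper, not the bitsandbytes API):

```python
# Illustrative helper mirroring the warning text above: if BNB_CUDA_VERSION is
# set, it replaces the PyTorch CUDA version when picking which
# libbitsandbytes_cuda<version>.so to load.
import os

def resolve_binary_name(torch_cuda_version: str) -> str:
    override = os.environ.get("BNB_CUDA_VERSION", "")
    version = override or torch_cuda_version  # e.g. "118" for CUDA 11.8
    return f"libbitsandbytes_cuda{version}.so"

print(resolve_binary_name("118"))  # with BNB_CUDA_VERSION=125 -> libbitsandbytes_cuda125.so
```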

@Titus-von-Koeller (Collaborator, Author)

Case 3 - no bitsandbytes CUDA native library present, but CUDA detected (through PyTorch):

❯ python -c 'from bitsandbytes.cextension import lib; lib.cquantize_blockwise_fp16_nf4()'
WARNING: BNB_CUDA_VERSION=124 environment variable detected; loading libbitsandbytes_cuda124.so.
This can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.
If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=
If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH
For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64

bitsandbytes library load error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda124.so
Traceback (most recent call last):
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 262, in <module>
    lib = get_native_library()
          ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 242, in get_native_library
    raise RuntimeError(f"Configured CUDA binary not found at {cuda_binary_path}")
RuntimeError: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda124.so
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 225, in __getattr__
    raise RuntimeError(f"{self.formatted_error}Native code method attempted to access: lib.{name}()")
RuntimeError: 
🚨 Forgot to compile the bitsandbytes library? 🚨
1. You're not using the package but checked-out the source code
2. You MUST compile from source

Attempted to use bitsandbytes native library functionality but it's not available.

This typically happens when:
1. bitsandbytes doesn't ship with a pre-compiled binary for your CUDA version
2. The library wasn't compiled properly during installation from source

To make bitsandbytes work, the compiled library version MUST exactly match the linked CUDA version.
If your CUDA version doesn't have a pre-compiled binary, you MUST compile from source.

You have two options:
1. COMPILE FROM SOURCE (required if no binary exists):
   https://huggingface.co/docs/bitsandbytes/main/en/installation#cuda-compile
2. Use BNB_CUDA_VERSION to specify a DIFFERENT CUDA version from the detected one, which is installed on your machine and matching an available pre-compiled version listed above

Original error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda124.so

🔍 Run this command for detailed diagnostics:
python -m bitsandbytes

If you've tried everything and still have issues:
1. Include ALL version info (operating system, bitsandbytes, pytorch, cuda, python)
2. Describe what you've tried in detail
3. Open an issue with this information:
   https://github.com/bitsandbytes-foundation/bitsandbytes/issues

Native code method attempted to access: lib.cquantize_blockwise_fp16_nf4()
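
The tracebacks in these cases all show the same mechanism: the import itself no longer hard-fails; instead, a stub object raises the formatted setup error the moment a native symbol is used. A minimal, self-contained sketch of that pattern (class and variable names here are illustrative, not the exact cextension.py implementation):

```python
# Deferred-error pattern visible in the tracebacks above (__getattr__ /
# throw_on_call in cextension.py): importing succeeds, but any access to a
# native symbol raises the formatted setup error.
import ctypes

class _ErrorStubLib:
    def __init__(self, formatted_error: str):
        self.formatted_error = formatted_error

    def __getattr__(self, name: str):
        raise RuntimeError(
            f"{self.formatted_error}Native code method attempted to access: lib.{name}()"
        )

try:
    lib = ctypes.CDLL("libbitsandbytes_cuda118.so")  # stands in for get_native_library()
except OSError as error:
    lib = _ErrorStubLib(f"bitsandbytes library load error: {error}\n\n")

# The failure only surfaces on first use of a native symbol:
# lib.cquantize_blockwise_fp16_nf4()  # -> RuntimeError with the full message
```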

@Titus-von-Koeller (Collaborator, Author)

Case 4a - no matching CUDA library for the PyTorch-detected CUDA installation:

❯ python -c 'from bitsandbytes.cextension import lib; lib.cquantize_blockwise_fp16_nf4()'
bitsandbytes library load error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda118.so
Traceback (most recent call last):
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 262, in <module>
    lib = get_native_library()
          ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 242, in get_native_library
    raise RuntimeError(f"Configured CUDA binary not found at {cuda_binary_path}")
RuntimeError: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda118.so
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 225, in __getattr__
    raise RuntimeError(f"{self.formatted_error}Native code method attempted to access: lib.{name}()")
RuntimeError: 
🚨 CUDA VERSION MISMATCH 🚨
Requested CUDA version:          11.8
Detected PyTorch CUDA version:   11.8
Available pre-compiled versions: 
  - 12.4

This means:
The version you're trying to use is NOT distributed with this package

Attempted to use bitsandbytes native library functionality but it's not available.

This typically happens when:
1. bitsandbytes doesn't ship with a pre-compiled binary for your CUDA version
2. The library wasn't compiled properly during installation from source

To make bitsandbytes work, the compiled library version MUST exactly match the linked CUDA version.
If your CUDA version doesn't have a pre-compiled binary, you MUST compile from source.

You have two options:
1. COMPILE FROM SOURCE (required if no binary exists):
   https://huggingface.co/docs/bitsandbytes/main/en/installation#cuda-compile
2. Use BNB_CUDA_VERSION to specify a DIFFERENT CUDA version from the detected one, which is installed on your machine and matching an available pre-compiled version listed above

Original error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda118.so

🔍 Run this command for detailed diagnostics:
python -m bitsandbytes

If you've tried everything and still have issues:
1. Include ALL version info (operating system, bitsandbytes, pytorch, cuda, python)
2. Describe what you've tried in detail
3. Open an issue with this information:
   https://github.com/bitsandbytes-foundation/bitsandbytes/issues

Native code method attempted to access: lib.cquantize_blockwise_fp16_nf4()

Case 4b - custom BNB_CUDA_VERSION=124 with CUDA detected:

The only difference from case 4a is in this part:

🚨 CUDA VERSION MISMATCH 🚨
Requested CUDA version:          12.4
Detected PyTorch CUDA version:   11.8
Available pre-compiled versions: 
  - 12.3
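
To see which pre-compiled binaries a given install actually ships (the list rendered as "Available pre-compiled versions" above), one can glob the package directory for the libbitsandbytes_cuda*.so naming scheme that appears throughout these logs:

```python
# List the pre-compiled CUDA binaries bundled with the installed or
# checked-out package; this is what the "Available pre-compiled versions"
# section of the error reflects.
from pathlib import Path
import bitsandbytes

pkg_dir = Path(bitsandbytes.__file__).parent
for so in sorted(pkg_dir.glob("libbitsandbytes_cuda*.so")):
    print(so.name)  # e.g. libbitsandbytes_cuda124.so
```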

@Titus-von-Koeller (Collaborator, Author)

Case 5 - CPU-only installation, when GPU functionality is requested:

❯ python -c 'from bitsandbytes.cextension import lib; lib.cquantize_blockwise_fp16_nf4()'
WARNING: BNB_CUDA_VERSION=124 environment variable detected; loading libbitsandbytes_cuda124.so.
This can be used to load a bitsandbytes version that is different from the PyTorch CUDA version.
If this was unintended set the BNB_CUDA_VERSION variable to an empty string: export BNB_CUDA_VERSION=
If you use the manual override make sure the right libcudart.so is in your LD_LIBRARY_PATH
For example by adding the following to your .bashrc: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path_to_cuda_dir/lib64

bitsandbytes library load error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda124.so
Traceback (most recent call last):
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 281, in <module>
    lib = get_native_library()
          ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 261, in get_native_library
    raise RuntimeError(f"Configured CUDA binary not found at {cuda_binary_path}")
RuntimeError: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda124.so
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 239, in throw_on_call
    raise RuntimeError(
RuntimeError: 
🚨 Forgot to compile the bitsandbytes library? 🚨
1. You're not using the package but checked-out the source code
2. You MUST compile from source

Attempted to use bitsandbytes native library functionality but it's not available.

This typically happens when:
1. bitsandbytes doesn't ship with a pre-compiled binary for your CUDA version
2. The library wasn't compiled properly during installation from source

To make bitsandbytes work, the compiled library version MUST exactly match the linked CUDA version.
If your CUDA version doesn't have a pre-compiled binary, you MUST compile from source.

You have two options:
1. COMPILE FROM SOURCE (required if no binary exists):
   https://huggingface.co/docs/bitsandbytes/main/en/installation#cuda-compile
2. Use BNB_CUDA_VERSION to specify a DIFFERENT CUDA version from the detected one, which is installed on your machine and matching an available pre-compiled version listed above

Original error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda124.so

🔍 Run this command for detailed diagnostics:
python -m bitsandbytes

If you've tried everything and still have issues:
1. Include ALL version info (operating system, bitsandbytes, pytorch, cuda, python)
2. Describe what you've tried in detail
3. Open an issue with this information:
   https://github.com/bitsandbytes-foundation/bitsandbytes/issues

Native code method attempted to call: lib.cquantize_blockwise_fp16_nf4()

@Titus-von-Koeller (Collaborator, Author)

CPU-only error handling:

❯ python -c 'from bitsandbytes.cextension import lib; lib.cquantize_blockwise_cpu_fp32'
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.

ubuntu in 🌐 ip-10-90-0-110 in bnb/bitsandbytes on  sensible-error-on-failed-lib-loading [!?⇡] via 🐍 v3.11.11 via 🅒 bnb 
❯ python -c 'from bitsandbytes.cextension import lib; lib.cquantize_blockwise_fp16_nf4()'
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 52, in throw_on_call
    raise RuntimeError(
RuntimeError: Method 'cquantize_blockwise_fp16_nf4' not available in CPU-only version of bitsandbytes.
Reinstall with GPU support or use CUDA-enabled hardware.
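
In the CPU-only build, the warning is printed at import time, CPU symbols resolve normally, and GPU-only symbols raise only when actually called (the throw_on_call frame in the traceback). A rough sketch of that behavior, with illustrative names (class name and cpu_methods set are not the actual implementation):

```python
# Sketch of the CPU-only behavior shown above: known CPU symbols are forwarded
# to the compiled CPU library, everything else raises only when called.
class _CpuOnlyLib:
    def __init__(self, cpu_lib, cpu_methods):
        self._cpu_lib = cpu_lib
        self._cpu_methods = set(cpu_methods)

    def __getattr__(self, name):
        if name in self._cpu_methods:
            return getattr(self._cpu_lib, name)

        def throw_on_call(*args, **kwargs):
            raise RuntimeError(
                f"Method '{name}' not available in CPU-only version of bitsandbytes.\n"
                "Reinstall with GPU support or use CUDA-enabled hardware."
            )

        return throw_on_call
```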

@Titus-von-Koeller (Collaborator, Author)

Updated `python -m bitsandbytes` output:

❯ python -m bitsandbytes
bitsandbytes library load error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda118.so
Traceback (most recent call last):
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 287, in <module>
    lib = get_native_library()
          ^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 267, in get_native_library
    raise RuntimeError(f"Configured CUDA binary not found at {cuda_binary_path}")
RuntimeError: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda118.so
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
CUDA specs: CUDASpecs(highest_compute_capability=(8, 9), cuda_version_string='118', cuda_version_tuple=(11, 8))
PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: (8, 9).
Library not found: /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda118.so. Maybe you need to compile it from source?
If you compiled from source, try again with `make CUDA_VERSION=DETECTED_CUDA_VERSION`,
for example, `make CUDA_VERSION=113`.

The CUDA version for the compile might depend on your conda install, if using conda.
Inspect CUDA version via `conda list | grep cuda`.
To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/docs/source/nonpytorchcuda.mdx
Found duplicate CUDA runtime files (see below).

We select the PyTorch default CUDA runtime, which is 11.8,
but this might mismatch with the CUDA version that is needed for bitsandbytes.
To override this behavior set the `BNB_CUDA_VERSION=<version string, e.g. 122>` environmental variable.

For example, if you want to use the CUDA version 122,
    BNB_CUDA_VERSION=122 python ...

OR set the environmental variable in your .bashrc:
    export BNB_CUDA_VERSION=122

In the case of a manual override, make sure you set LD_LIBRARY_PATH, e.g.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2,
* Found CUDA runtime at: /home/ubuntu/cuda/cuda-12.4/lib64/libcudart.so.12.4.127
* Found CUDA runtime at: /home/ubuntu/cuda/cuda-12.4/lib64/libcudart.so
* Found CUDA runtime at: /home/ubuntu/cuda/cuda-12.4/lib64/libcudart.so.12
* Found CUDA runtime at: /home/ubuntu/cuda/cuda-12.4/lib64/libcudart.so.12.4.127
* Found CUDA runtime at: /home/ubuntu/cuda/cuda-12.4/lib64/libcudart.so
* Found CUDA runtime at: /home/ubuntu/cuda/cuda-12.4/lib64/libcudart.so.12
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and CUDA is callable...
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/ubuntu/src/bnb/bitsandbytes/__main__.py", line 4, in <module>
    main()
  File "/home/ubuntu/src/bnb/bitsandbytes/diagnostics/main.py", line 63, in main
    raise e
  File "/home/ubuntu/src/bnb/bitsandbytes/diagnostics/main.py", line 51, in main
    sanity_check()
  File "/home/ubuntu/src/bnb/bitsandbytes/diagnostics/main.py", line 25, in sanity_check
    adam.step()
  File "/home/ubuntu/.condax/mamba/envs/bnb/lib/python3.11/site-packages/torch/optim/optimizer.py", line 485, in wrapper
    out = func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/.condax/mamba/envs/bnb/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/optim/optimizer.py", line 291, in step
    self.update_step(group, p, gindex, pindex)
  File "/home/ubuntu/.condax/mamba/envs/bnb/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/bnb/bitsandbytes/optim/optimizer.py", line 521, in update_step
    F.optimizer_update_32bit(
  File "/home/ubuntu/src/bnb/bitsandbytes/functional.py", line 1329, in optimizer_update_32bit
    optim_func(
  File "/home/ubuntu/src/bnb/bitsandbytes/cextension.py", line 248, in throw_on_call
    raise RuntimeError(f"{self.formatted_error}Native code method attempted to call: lib.{name}()")
RuntimeError: 
🚨 CUDA VERSION MISMATCH 🚨
Requested CUDA version:          11.8
Detected PyTorch CUDA version:   11.8
Available pre-compiled versions: 
  - 12.4

This means:
The version you're trying to use is NOT distributed with this package

Attempted to use bitsandbytes native library functionality but it's not available.

This typically happens when:
1. bitsandbytes doesn't ship with a pre-compiled binary for your CUDA version
2. The library wasn't compiled properly during installation from source

To make bitsandbytes work, the compiled library version MUST exactly match the linked CUDA version.
If your CUDA version doesn't have a pre-compiled binary, you MUST compile from source.

You have two options:
1. COMPILE FROM SOURCE (required if no binary exists):
   https://huggingface.co/docs/bitsandbytes/main/en/installation#cuda-compile
2. Use BNB_CUDA_VERSION to specify a DIFFERENT CUDA version from the detected one, which is installed on your machine and matching an available pre-compiled version listed above

Original error: Configured CUDA binary not found at /home/ubuntu/src/bnb/bitsandbytes/libbitsandbytes_cuda118.so

🔍 Run this command for detailed diagnostics:
python -m bitsandbytes

If you've tried everything and still have issues:
1. Include ALL version info (operating system, bitsandbytes, pytorch, cuda, python)
2. Describe what you've tried in detail
3. Open an issue with this information:
   https://github.com/bitsandbytes-foundation/bitsandbytes/issues

Native code method attempted to call: lib.cadam32bit_grad_fp32()
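
For reference, the sanity check that fails at the end of this traceback boils down to a single optimizer step that forces a call into the native library; roughly the following (a simplification inferred from the traceback, assuming a CUDA device is visible to PyTorch):

```python
# Roughly what the diagnostics sanity check exercises (per the traceback
# above): one step of the bitsandbytes Adam optimizer on a small parameter,
# which calls into the native library and therefore surfaces the error.
import torch
import bitsandbytes as bnb

p = torch.nn.Parameter(torch.rand(16, 16).cuda())
adam = bnb.optim.Adam([p])
p.grad = torch.rand_like(p)
adam.step()  # raises the formatted CUDA setup error if the native lib is unavailable
```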

@Titus-von-Koeller merged commit 4fb52dc into main on May 9, 2025
70 checks passed
Linked issue (may be closed by this PR): Improve stack trace when C library does not load