Description
🐛 Bug
When trying to build PyTorch from source with a C++ toolchain of nvcc 10.2 + gcc 8 + libc++, I get a compiler error:
bazel-out/k8-fastbuild-gcc/bin/external/pytorch/aten/src/ATen/native/cpu/MultinomialKernel.cpp.DEFAULT.cpp:52:7: required from 'void at::native::{anonymous}::multinomial_apply(at::Tensor&, const at::Tensor&, int64_t, bool, c10::optional<at::Generator>) [with scalar_t = c10::Half; int64_t = long int]'
bazel-out/k8-fastbuild-gcc/bin/external/pytorch/aten/src/ATen/native/cpu/MultinomialKernel.cpp.DEFAULT.cpp:134:3: required from here
/opt/sysroot/opt/libcxx/include/c++/v1/math.h:449:1: error: no type named 'type' in 'struct std::__1::enable_if<false, bool>'
The code that produces the problem is here
pytorch/aten/src/ATen/native/cpu/MultinomialKernel.cpp
Lines 52 to 58 in deb74ed
#if defined(__clang__)
  TORCH_CHECK(std::isfinite(static_cast<double>(val)),
      "invalid multinomial distribution (encountering probability entry = infinity or NaN)");
#else
  TORCH_CHECK(std::isfinite(val),
      "invalid multinomial distribution (encountering probability entry = infinity or NaN)");
#endif
To Reproduce
Steps to reproduce the behavior:
Sorry, I don't have a good repro with upstream PyTorch, since our build uses a custom toolchain and the setup is quite elaborate.
Expected behavior
There should not be a compile error.
Environment
Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with:
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (conda, pip, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
Additional context
The problem is that the check assumes libc++ is only ever used with clang, while gcc + libc++ is also a valid combination.
I'm going to send a PR with a proposed fix shortly.