
Conversation


@vors vors commented Feb 11, 2021

Fixes #52163

The libc++ vs libstdc++ detection in the pre-processor is taken from https://stackoverflow.com/questions/31657499/how-to-detect-stdlib-libc-in-the-preprocessor

Note that in our case the presence of `std::isinfinite` means that we don't need to include any additional headers to guarantee that `_LIBCPP_VERSION` is defined for libc++.


facebook-github-bot commented Feb 11, 2021

💊 CI failures summary and remediations

As of commit 84bff65 (more details on the Dr. CI page):



🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_build (1/1)

Step: "Install Cuda"

ls: cannot access '/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe': No such file or directory

Folders: 11
Files: 130
Size:       907512
Compressed: 111420
+ mkdir -p 'C:/Program Files/NVIDIA Corporation/NvToolsExt'
+ cp -r NvToolsExt/bin NvToolsExt/docs NvToolsExt/include NvToolsExt/lib NvToolsExt/samples 'C:/Program Files/NVIDIA Corporation/NvToolsExt/'
+ export 'NVTOOLSEXT_PATH=C:\Program Files\NVIDIA Corporation\NvToolsExt\'
+ NVTOOLSEXT_PATH='C:\Program Files\NVIDIA Corporation\NvToolsExt\'
+ ls '/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe'
ls: cannot access '/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe': No such file or directory
+ echo 'CUDA installation failed'
CUDA installation failed
+ mkdir -p /c/w/build-results
+ 7z a 'c:\w\build-results\cuda_install_logs.7z' cuda_install_logs

7-Zip 19.00 (x64) : Copyright (c) 1999-2018 Igor Pavlov : 2019-02-21

Scanning the drive:
1 folder, 2 files, 3721951 bytes (3635 KiB)


🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

@facebook-github-bot left a comment

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


vors commented Feb 12, 2021

Interestingly enough, I found a PR about a similar issue in pybind11, on which pytorch depends: pybind/pybind11#2569

@facebook-github-bot

@malfet merged this pull request in df837d0.

xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021
…nite (pytorch#52164)

Summary:
Fixes pytorch#52163

The libc++ vs libstdc++ detection in the pre-processor is taken from https://stackoverflow.com/questions/31657499/how-to-detect-stdlib-libc-in-the-preprocessor

Note that in our case the presence of `std::isinfinite` means that we don't need to include any additional headers to guarantee that `_LIBCPP_VERSION` is defined for `libc++`.

Pull Request resolved: pytorch#52164

Reviewed By: albanD

Differential Revision: D26413108

Pulled By: malfet

fbshipit-source-id: 515e258d6758222c910ababf5172c3a275aff08f


Development

Successfully merging this pull request may close these issues.

Compiler error in MultinomialKernel.cpp when building with gcc + libc++

4 participants