
Conversation

@colesbury
Member

Fixes #1267

This fixes a number of issues that arise when PyTorch is compiled with CUDA
support but run on a machine without any GPUs. Now, we treat all errors
from cudaGetDeviceCount() as if the machine has no devices.
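
As a minimal sketch of the idea (the helper name below is illustrative, not the code actually touched by this PR), any failure from cudaGetDeviceCount() is simply reported as zero devices:

```cpp
#include <cuda_runtime_api.h>

// Sketch only: treat every error from cudaGetDeviceCount() as
// "this machine has no CUDA devices".
static int safe_device_count() {
  int count = 0;
  cudaError_t err = cudaGetDeviceCount(&count);
  if (err != cudaSuccess) {
    // Typical failures on a GPU-less machine are cudaErrorNoDevice or
    // cudaErrorInsufficientDriver; clear the sticky error and report 0.
    cudaGetLastError();
    return 0;
  }
  return count;
}
```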

Now all tests pass with:

```sh
CUDA_VISIBLE_DEVICES= ./run_test.sh
```

I also changed _cuda_init and _cuda_sparse_init to return None on success and raise an exception on failure. Previously, they set the exception but returned a Python bool False, which isn't allowed: a CPython extension function that sets an exception must return NULL.
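
For illustration, here is a hedged sketch of that calling convention; the function body and the cuda_runtime_ready() helper are made up, not the actual _cuda_init implementation:

```cpp
#include <Python.h>

// Hypothetical helper standing in for the real CUDA initialization logic.
static bool cuda_runtime_ready() {
  return false;  // pretend initialization failed, for illustration
}

// The shape of an init function following the CPython convention:
// set an exception and return NULL on failure; return None on success.
static PyObject* example_cuda_init(PyObject* /*self*/, PyObject* /*noargs*/) {
  if (!cuda_runtime_ready()) {
    PyErr_SetString(PyExc_RuntimeError, "CUDA initialization failed");
    return NULL;       // an exception is set, so NULL must be returned
  }
  Py_RETURN_NONE;      // success: the Python caller sees None, not False
}
```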

We should probably add a smoke test for PyTorch compiled with CUDA but run without any GPUs (or with no visible devices).

@soumith soumith merged commit aab30d4 into pytorch:master Apr 23, 2017
@colesbury colesbury deleted the cuda_device branch April 24, 2017 15:33
eqy pushed a commit to eqy/pytorch that referenced this pull request Jan 20, 2022
 This PR relaxes the constraint so that arbitrary padding sizes can be used as long as output domains don't get larger than input domains.
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this pull request Nov 1, 2022
* update ngc link and dockerhub container tag

* update

* update

* update

* Update README.md

Co-authored-by: Masaki Kozuki <[email protected]>

Development

Successfully merging this pull request may close these issues.

Handling the error message of cudaGetDeviceCount
