Closed
Labels: enhancement, module: cuda, module: cudnn, triaged
Description
Hi,
I am running SegNet on TX1 and I get this error:
RuntimeError: cuda runtime error (7) : too many resources requested for launch at /home/nvidia/pytorch/torch/lib/THCUNN/generic/SpatialUpSamplingBilinear.cu:63
How can I fix it?
I tried adding 'launch_bounds(1024)' in '/home/nvidia/pytorch/torch/lib/THCUNN/generic/SpatialUpSamplingBilinear.cu', but it does not work.
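For reference, this is a minimal sketch of how the `__launch_bounds__` qualifier is normally attached to a CUDA kernel. The kernel name, body, and launch parameters below are purely illustrative, not the actual SpatialUpSamplingBilinear kernel:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// __launch_bounds__(1024) promises the compiler that this kernel will never
// be launched with more than 1024 threads per block, so it can cap register
// usage per thread accordingly. Hypothetical example kernel, not THCUNN code.
__global__ void __launch_bounds__(1024)
scale_kernel(const float* in, float* out, int n, float factor) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    out[i] = in[i] * factor;
  }
}

int main() {
  const int n = 1 << 16;
  float *in, *out;
  cudaMallocManaged(&in, n * sizeof(float));
  cudaMallocManaged(&out, n * sizeof(float));
  for (int i = 0; i < n; ++i) in[i] = 1.0f;

  int threads = 1024;                      // must not exceed the launch bound
  int blocks = (n + threads - 1) / threads;
  scale_kernel<<<blocks, threads>>>(in, out, n, 2.0f);
  cudaDeviceSynchronize();

  printf("out[0] = %f\n", out[0]);
  cudaFree(in);
  cudaFree(out);
  return 0;
}
```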
What is the difference between '/pytorch/torch/lib/THCUNN/generic/xx.cu' and '/pytorch/torch/lib/THCUNN/xx.cu'?
- PyTorch version: v0.3.0
- How PyTorch was installed: built from source
- Build commands: python setup.py build_deps followed by sudo python setup.py develop
- Python version: 2.7
- CUDA/cuDNN version: CUDA 9.0 / cuDNN 7.0
- GPU model and configuration: maximum threads per multiprocessor: 2048; maximum threads per block: 1024 (these limits can be queried as in the sketch after this list)
- GCC version: 5.4.0
- CMake version: 3.11.3
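For anyone reproducing this, per-device limits like the ones listed above can be read with cudaGetDeviceProperties; a small sketch (device index 0 assumed):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, 0);  // query device 0
  printf("Device: %s\n", prop.name);
  printf("Max threads per multiprocessor: %d\n", prop.maxThreadsPerMultiProcessor);
  printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
  printf("Registers per block: %d\n", prop.regsPerBlock);
  return 0;
}
```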