Closed
Labels
low priority - We're unlikely to get around to doing this in the near future
module: 64-bit - Problems related to incorrectly using 32-bit integers when 64-bit is needed (e.g., 8G tensors)
module: cuda - Related to torch.cuda, and CUDA support in general
module: random - Related to random number generation in PyTorch (rng generator)
triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
🐛 Bug
torch.randn silently fails when the output has more than 2^31-1 elements. The output remains uninitialized (often all zeros).
To Reproduce
Note: you need >8 GB on your GPU to run this example
import torch

x = torch.randn(65536, 32768, device='cuda')
print(x)

Environment
PyTorch master 066d158 (Mar 11, 2019)
CUDA 9.2.88
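
A possible workaround (not part of the original report, and assuming the failure only affects individual calls whose output exceeds 2^31 - 1 elements): allocate the tensor once, then fill it in row chunks that each stay below that limit. The helper name randn_large and the chunk_rows value below are illustrative choices, not PyTorch API.

import torch

def randn_large(rows, cols, device='cuda', chunk_rows=8192):
    # Illustrative sketch: choose chunk_rows so that chunk_rows * cols
    # stays below 2**31 - 1 (8192 * 32768 = 268,435,456 here), keeping
    # each in-place fill under the element count that triggers the bug.
    out = torch.empty(rows, cols, device=device)
    for start in range(0, rows, chunk_rows):
        # normal_() fills this row slice in place with standard-normal samples
        out[start:start + chunk_rows].normal_()
    return out

x = randn_large(65536, 32768)
print(x[-1, -5:])  # the last rows should show random values, not zeros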