Conversation

@vishwakftw (Contributor)

…ngular_solve

Changelog:

  • Iterate over mini-batches of at most 65535 matrices (see the sketch below)

Test Plan:

  • Added slow tests covering this behavior in test_torch and test_cuda

Fixes #21643 and fixes #13276
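
To make the changelog entry concrete, here is a minimal sketch of the mini-batching strategy, assuming a generic batched operation `op`; the helper name `chunked_batched_op` and all shapes are hypothetical, and the 65535 cap is the per-launch limit the PR works around (it appears to correspond to CUDA's 65535 grid-dimension limit, which batched MAGMA/cuBLAS kernels are subject to):

```python
import torch

MAX_BATCH = 65535  # per-launch cap on batched CUDA linear-algebra kernels

def chunked_batched_op(op, A, B):
    # Hypothetical helper sketching the fix: split an oversized batch into
    # chunks of at most MAX_BATCH matrices and run the kernel once per chunk.
    out = torch.empty_like(B)
    for start in range(0, A.shape[0], MAX_BATCH):
        end = min(start + MAX_BATCH, A.shape[0])
        out[start:end] = op(A[start:end], B[start:end])
    return out

# Usage sketch: a lower-triangular solve over a batch of 70000 > 65535 matrices.
A = torch.randn(70000, 3, 3, device="cuda").tril() + 3 * torch.eye(3, device="cuda")
B = torch.randn(70000, 3, 1, device="cuda")
X = chunked_batched_op(lambda a, b: torch.triangular_solve(b, a, upper=False)[0], A, B)
```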

@pytorchbot added the labels module: cuda (Related to torch.cuda, and CUDA support in general) and module: operators on Jun 12, 2019
@vishwakftw (Contributor, Author)

cc: @YurongYou @MarcoForte

@soumith (Contributor) commented Jun 12, 2019

test failures are legit

@vishwakftw (Contributor, Author)

@soumith all CUDA tests pass.

@facebook-github-bot (Contributor) left a comment

@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@soumith (Contributor) commented Jun 13, 2019

thank you!

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 13, 2019
…… (#21689)

Summary:
…ngular_solve

Changelog:
- Iterate over mini-batches of at most 65535 matrices
Pull Request resolved: pytorch/pytorch#21689

Differential Revision: D15800254

Pulled By: soumith

fbshipit-source-id: c743ff13f1ba25d26874429d44e41a3c0ed21d6a
@facebook-github-bot (Contributor)

@soumith merged this pull request in 4c03ac7.

@vishwakftw deleted the super-large-batches-linalg branch on June 26, 2019

Labels

Merged · module: cuda (Related to torch.cuda, and CUDA support in general) · open source


Development

Successfully merging this pull request may close these issues.

  • torch.solve in GPU fails when batch > 65535
  • Memory error for batched inverse
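
For context, a rough repro of the linked failures, under assumed shapes and values, might look like the following; `torch.solve` is used with its 2019-era signature, which returns a `(solution, LU)` pair:

```python
import torch

# Illustrative repro (hypothetical sizes): before this PR, batched CUDA solves
# and inverses failed once the batch exceeded 65535 matrices.
A = torch.randn(70000, 4, 4, device="cuda")
A = A @ A.transpose(-2, -1) + 4 * torch.eye(4, device="cuda")  # well-conditioned SPD batch
b = torch.randn(70000, 4, 1, device="cuda")

x, lu = torch.solve(b, A)   # batch of 70000 > 65535 (the #21643 case)
A_inv = torch.inverse(A)    # batched inverse (the #13276 case)
```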
