Fix sharing of CUDA tensors on non-current devices #1726
Closed
Conversation
The correct device must be set when getting the base allocation and when calling cudaIpcCloseMemHandle. Store the device in the allocator's context, which was previously always NULL. Fixes pytorch#1707
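The description points at two device-sensitive steps: obtaining the base allocation (opening the IPC handle) and later closing it. Below is a minimal sketch of the open side, assuming the receiving process records the allocation's device in the allocator context pointer; the function and variable names are illustrative, not the actual THC symbols.

```c
#include <cuda_runtime.h>
#include <stdint.h>

/* Illustrative only: open an IPC handle on the tensor's own device and
 * remember that device so later calls can switch back to it. */
static cudaError_t ipc_open_on_device(cudaIpcMemHandle_t handle, int device,
                                      void **devPtr, void **ctx)
{
  int prev_device;
  cudaError_t err = cudaGetDevice(&prev_device);
  if (err != cudaSuccess) { return err; }

  /* The allocation lives on `device`, which need not be the current one. */
  err = cudaSetDevice(device);
  if (err != cudaSuccess) { return err; }

  err = cudaIpcOpenMemHandle(devPtr, handle, cudaIpcMemLazyEnablePeerAccess);
  if (err != cudaSuccess) { cudaSetDevice(prev_device); return err; }

  /* Store the device in the allocator context (previously always NULL). */
  *ctx = (void *)(intptr_t)device;
  return cudaSetDevice(prev_device);
}
```

Encoding the device directly in the context pointer avoids allocating a separate struct just to carry one integer.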
apaszke approved these changes on Jun 5, 2017
torch/lib/THC/THCAllocator.c (outdated)
```c
if (err != cudaSuccess) { return err; }

err = cudaIpcCloseMemHandle(devPtr);
if (err != cudaSuccess) { return err; }
```
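The hunk above shows the cudaIpcCloseMemHandle call that has to run with the allocation's device current. A hedged sketch of how the free path could wrap that call, assuming the device was stashed in the context pointer at open time (names are illustrative rather than the exact THCAllocator.c code):

```c
#include <cuda_runtime.h>
#include <stdint.h>

/* Illustrative free path: switch to the allocation's device before
 * cudaIpcCloseMemHandle, then restore the caller's current device. */
static cudaError_t ipc_free_on_device(void *ctx, void *devPtr)
{
  int prev_device;
  int device = (int)(intptr_t)ctx;   /* device recorded when the handle was opened */

  cudaError_t err = cudaGetDevice(&prev_device);
  if (err != cudaSuccess) { return err; }

  err = cudaSetDevice(device);
  if (err != cudaSuccess) { return err; }

  err = cudaIpcCloseMemHandle(devPtr);
  if (err != cudaSuccess) { return err; }

  return cudaSetDevice(prev_device);
}
```

Restoring the caller's previous device afterwards keeps the free path transparent to surrounding code that assumes the current device is unchanged.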
Contributor
merged into master
@soumith Hi, I just ran into this problem when sharing CUDA tensors across subprocesses on GPUs other than the first one. Is this fix included in the latest version of PyTorch? I updated my PyTorch, but I still hit the problem.
houseroad added a commit to houseroad/pytorch that referenced this pull request on Jan 15, 2019
…827566

Summary: Previous import was 7abd834091f1024c11749dcfd25126802db9fdd5

Included changes:
- **[84a0441](onnx/onnx@84a0441)**: Clarify namescopes in the presence of nested subgraphs (pytorch#1665) <G. Ramalingam>
- **[118fec5](onnx/onnx@118fec5)**: Add Where op. (pytorch#1569) <Sergii Dymchenko>
- **[beefa15](onnx/onnx@beefa15)**: Use strings directly for casing as np.object w/o redundant StringHolder. (pytorch#1736) <Dmitri Smirnov>
- **[4023bae](onnx/onnx@4023bae)**: Add a capability to input/output unicode strings (pytorch#1734) <Dmitri Smirnov>
- **[1a8a7fc](onnx/onnx@1a8a7fc)**: typos fixed: iutput -> input (pytorch#1726) <Beomsoo Kim>
- **[0128478](onnx/onnx@0128478)**: Scan test update (pytorch#1732) <G. Ramalingam>
- **[c6a24fd](onnx/onnx@c6a24fd)**: turn rtol to 0.002 on densenet121, since AMD and Nvidia GPU's precion difference (pytorch#1733) <Lu Fang>
- **[5b7ac72](onnx/onnx@5b7ac72)**: Add Shrink operator (pytorch#1622) <Rui Zhu>

Differential Revision: D13676711
fbshipit-source-id: 0b7b8a398afa4a3b54752fb792f19e7efca80f65
facebook-github-bot pushed a commit that referenced this pull request on Jan 16, 2019
…827566 (#16046)

Summary: Pull Request resolved: #16046

Previous import was 7abd834091f1024c11749dcfd25126802db9fdd5

Included changes:
- **[84a0441](onnx/onnx@84a0441)**: Clarify namescopes in the presence of nested subgraphs (#1665) <G. Ramalingam>
- **[118fec5](onnx/onnx@118fec5)**: Add Where op. (#1569) <Sergii Dymchenko>
- **[beefa15](onnx/onnx@beefa15)**: Use strings directly for casing as np.object w/o redundant StringHolder. (#1736) <Dmitri Smirnov>
- **[4023bae](onnx/onnx@4023bae)**: Add a capability to input/output unicode strings (#1734) <Dmitri Smirnov>
- **[1a8a7fc](onnx/onnx@1a8a7fc)**: typos fixed: iutput -> input (#1726) <Beomsoo Kim>
- **[0128478](onnx/onnx@0128478)**: Scan test update (#1732) <G. Ramalingam>
- **[c6a24fd](onnx/onnx@c6a24fd)**: turn rtol to 0.002 on densenet121, since AMD and Nvidia GPU's precion difference (#1733) <Lu Fang>
- **[5b7ac72](onnx/onnx@5b7ac72)**: Add Shrink operator (#1622) <Rui Zhu>

Reviewed By: yinghai

Differential Revision: D13676711
fbshipit-source-id: 513cc137223469b47af48919432aaecf58006012
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request on May 24, 2022
malfet pushed a commit that referenced this pull request on Jun 8, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

A few bigger updates:
1. Initial support of cp.async and cp.async.wait: csarofeen#1619
2. Emulate ampere's mma 16816 with Turing's mma 1688, for a unified interface: csarofeen#1643
3. Extending the infrastructure to support mma operators on turing and ampere arch: csarofeen#1440

Commits that's actually in this PR from the csarofeen branch
```
* dd23252 (csarofeen/devel) Fusion Segmenter: Unify single kernel and multi-kernel runtime path (#1710)
* b3d1c3f Fix missing cooperative launch (#1726)
* dc670a2 Async gmem copy support on sm80+ (#1619)
* 5e6a8da Add turing mma support and test (#1643)
* d6d6b7d Fix rFactor when there are indirect root domain(s), and refactor (#1723)
* 7093e39 Mma op integration on ampere (#1440)
* fade8da patch python test for bfloat16 (#1724)
* 8fbd0b1 Fine-grained kernel profiling (#1720)
* 77c1b4f Adding dry run mode to skip arch dependent checks (#1702)
* 151d95b More precise concretization analysis (#1719)
* f4d3630 Enable complex python tests (#1667)
* 4ceeee5 Minor bugfix in transform_rfactor.cpp (#1715)
* 3675c70 Separate root domain and rfactor domain in TransformPrinter (#1716)
* f68b830 Fix scheduling with polymorphic broadcast (#1714)
* 4ab5ef7 updating_ci_machine (#1718)
* 56585c5 Merge pull request #1711 from csarofeen/upstream_master_bump_0517
* 174d453 Allow using nvFuser on CUDA extension (#1701)
* 18bee67 Validate LOOP concrete IDs have complete IterDomains (#1676)
```

Pull Request resolved: #78244
Approved by: https://github.com/csarofeen, https://github.com/malfet
facebook-github-bot pushed a commit that referenced this pull request on Jun 8, 2022
Summary: Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

A few bigger updates:
1. Initial support of cp.async and cp.async.wait: csarofeen#1619
2. Emulate ampere's mma 16816 with Turing's mma 1688, for a unified interface: csarofeen#1643
3. Extending the infrastructure to support mma operators on turing and ampere arch: csarofeen#1440

Commits that's actually in this PR from the csarofeen branch
```
* dd23252 (csarofeen/devel) Fusion Segmenter: Unify single kernel and multi-kernel runtime path (#1710)
* b3d1c3f Fix missing cooperative launch (#1726)
* dc670a2 Async gmem copy support on sm80+ (#1619)
* 5e6a8da Add turing mma support and test (#1643)
* d6d6b7d Fix rFactor when there are indirect root domain(s), and refactor (#1723)
* 7093e39 Mma op integration on ampere (#1440)
* fade8da patch python test for bfloat16 (#1724)
* 8fbd0b1 Fine-grained kernel profiling (#1720)
* 77c1b4f Adding dry run mode to skip arch dependent checks (#1702)
* 151d95b More precise concretization analysis (#1719)
* f4d3630 Enable complex python tests (#1667)
* 4ceeee5 Minor bugfix in transform_rfactor.cpp (#1715)
* 3675c70 Separate root domain and rfactor domain in TransformPrinter (#1716)
* f68b830 Fix scheduling with polymorphic broadcast (#1714)
* 4ab5ef7 updating_ci_machine (#1718)
* 56585c5 Merge pull request #1711 from csarofeen/upstream_master_bump_0517
* 174d453 Allow using nvFuser on CUDA extension (#1701)
* 18bee67 Validate LOOP concrete IDs have complete IterDomains (#1676)
```

Pull Request resolved: #78244

Reviewed By: ejguan

Differential Revision: D36678948

Pulled By: davidberard98
fbshipit-source-id: 0ccde965acbd31da67d99c6adb2eaaa888948105