
Conversation

@colesbury (Member)

The correct device must be set when getting the base allocation and when
calling cudaIpcCloseMemHandle. Store the device in the allocator's
context, which was previously always NULL.

Fixes #1707

From the diff:

```cpp
if (err != cudaSuccess) { return err; }

err = cudaIpcCloseMemHandle(devPtr);
if (err != cudaSuccess) { return err; }
```
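A minimal sketch of the pattern the fix describes, assuming a context struct that now carries the device index (the `IpcContext` name and `closeIpcMem` wrapper are illustrative, not the actual THC code): switch to the stored device before closing the IPC handle, then restore the caller's device.

```cpp
#include <cuda_runtime.h>

struct IpcContext {
  int device;  // device index recorded when the IPC allocation was opened (assumed field)
};

static cudaError_t closeIpcMem(IpcContext* ctx, void* devPtr) {
  int prevDevice;
  cudaError_t err = cudaGetDevice(&prevDevice);
  if (err != cudaSuccess) { return err; }

  // cudaIpcCloseMemHandle must run with the owning device current
  err = cudaSetDevice(ctx->device);
  if (err != cudaSuccess) { return err; }

  err = cudaIpcCloseMemHandle(devPtr);

  // Restore the caller's device regardless of the close result
  cudaError_t restoreErr = cudaSetDevice(prevDevice);
  return (err != cudaSuccess) ? err : restoreErr;
}
```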


@soumith (Contributor) commented Jun 5, 2017

merged into master

@soumith soumith closed this Jun 5, 2017
@colesbury colesbury deleted the cuda_ipc_device branch June 5, 2017 18:02
@mingzhew

@soumith Hi, I just hit this problem when sharing CUDA tensors across subprocesses on GPUs other than the first one. Is this fix included in the latest version of PyTorch? I updated my PyTorch, but I still see the problem.

houseroad added a commit to houseroad/pytorch that referenced this pull request Jan 15, 2019
…827566

Summary:
Previous import was 7abd834091f1024c11749dcfd25126802db9fdd5

Included changes:
- **[84a0441](onnx/onnx@84a0441)**: Clarify namescopes in the presence of nested subgraphs (pytorch#1665) <G. Ramalingam>
- **[118fec5](onnx/onnx@118fec5)**: Add Where op. (pytorch#1569) <Sergii Dymchenko>
- **[beefa15](onnx/onnx@beefa15)**: Use strings directly for casing as np.object w/o redundant StringHolder. (pytorch#1736) <Dmitri Smirnov>
- **[4023bae](onnx/onnx@4023bae)**: Add a capability to input/output unicode strings (pytorch#1734) <Dmitri Smirnov>
- **[1a8a7fc](onnx/onnx@1a8a7fc)**: typos fixed: iutput -> input (pytorch#1726) <Beomsoo Kim>
- **[0128478](onnx/onnx@0128478)**: Scan test update (pytorch#1732) <G. Ramalingam>
- **[c6a24fd](onnx/onnx@c6a24fd)**: turn rtol to 0.002 on densenet121, due to the precision difference between AMD and Nvidia GPUs (pytorch#1733) <Lu Fang>
- **[5b7ac72](onnx/onnx@5b7ac72)**: Add Shrink operator (pytorch#1622) <Rui Zhu>

Differential Revision: D13676711

fbshipit-source-id: 0b7b8a398afa4a3b54752fb792f19e7efca80f65
facebook-github-bot pushed a commit that referenced this pull request Jan 16, 2019
…827566 (#16046)

Summary:
Pull Request resolved: #16046

Previous import was 7abd834091f1024c11749dcfd25126802db9fdd5

Included changes:
- **[84a0441](onnx/onnx@84a0441)**: Clarify namescopes in the presence of nested subgraphs (#1665) <G. Ramalingam>
- **[118fec5](onnx/onnx@118fec5)**: Add Where op. (#1569) <Sergii Dymchenko>
- **[beefa15](onnx/onnx@beefa15)**: Use strings directly for casing as np.object w/o redundant StringHolder. (#1736) <Dmitri Smirnov>
- **[4023bae](onnx/onnx@4023bae)**: Add a capability to input/output unicode strings (#1734) <Dmitri Smirnov>
- **[1a8a7fc](onnx/onnx@1a8a7fc)**: typos fixed: iutput -> input (#1726) <Beomsoo Kim>
- **[0128478](onnx/onnx@0128478)**: Scan test update (#1732) <G. Ramalingam>
- **[c6a24fd](onnx/onnx@c6a24fd)**: turn rtol to 0.002 on densenet121, due to the precision difference between AMD and Nvidia GPUs (#1733) <Dmitri Smirnov>
- **[5b7ac72](onnx/onnx@5b7ac72)**: Add Shrink operator (#1622) <Rui Zhu>

Reviewed By: yinghai

Differential Revision: D13676711

fbshipit-source-id: 513cc137223469b47af48919432aaecf58006012
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request May 24, 2022
malfet pushed a commit that referenced this pull request Jun 8, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

A few bigger updates:
1. Initial support of cp.async and cp.async.wait: csarofeen#1619 (a minimal usage sketch follows this list)
2. Emulate Ampere's mma 16816 with Turing's mma 1688, for a unified interface: csarofeen#1643
3. Extending the infrastructure to support mma operators on Turing and Ampere architectures: csarofeen#1440
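
As a rough illustration of what cp.async enables (not nvfuser's generated code; the kernel name, tile size, and inline-PTX wrappers are assumptions, and this requires sm_80+): each thread issues an asynchronous global-to-shared copy, then waits on the copy group before using shared memory.

```cuda
// Hedged sketch of cp.async / cp.async.wait usage on sm_80+.
// Launch with 256 threads per block; `in` must have >= 256 elements.
__global__ void asyncCopyTile(const float* __restrict__ in, float* out) {
  __shared__ float tile[256];
  int tid = threadIdx.x;

  // Convert the generic shared-memory pointer to a shared-state-space address
  unsigned smem = static_cast<unsigned>(__cvta_generic_to_shared(&tile[tid]));

  // Issue a 4-byte asynchronous copy from global to shared memory
  asm volatile("cp.async.ca.shared.global [%0], [%1], 4;\n"
               :: "r"(smem), "l"(in + tid));

  // Commit the outstanding copies and wait until they all complete
  asm volatile("cp.async.commit_group;\n");
  asm volatile("cp.async.wait_group 0;\n");
  __syncthreads();

  out[tid] = tile[tid] * 2.0f;
}
```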

Commits that are actually in this PR from the csarofeen branch:
```
* dd23252 (csarofeen/devel) Fusion Segmenter: Unify single kernel and multi-kernel runtime path (#1710)
* b3d1c3f Fix missing cooperative launch (#1726)
* dc670a2 Async gmem copy support on sm80+ (#1619)
* 5e6a8da Add turing mma support and test (#1643)
* d6d6b7d Fix rFactor when there are indirect root domain(s), and refactor (#1723)
* 7093e39 Mma op integration on ampere (#1440)
* fade8da patch python test for bfloat16 (#1724)
* 8fbd0b1 Fine-grained kernel profiling (#1720)
* 77c1b4f Adding dry run mode to skip arch dependent checks (#1702)
* 151d95b More precise concretization analysis (#1719)
* f4d3630 Enable complex python tests (#1667)
* 4ceeee5 Minor bugfix in transform_rfactor.cpp (#1715)
* 3675c70 Separate root domain and rfactor domain in TransformPrinter (#1716)
* f68b830 Fix scheduling with polymorphic broadcast (#1714)
* 4ab5ef7 updating_ci_machine (#1718)
* 56585c5 Merge pull request #1711 from csarofeen/upstream_master_bump_0517
* 174d453 Allow using nvFuser on CUDA extension (#1701)
* 18bee67 Validate LOOP concrete IDs have complete IterDomains (#1676)
```
Pull Request resolved: #78244
Approved by: https://github.com/csarofeen, https://github.com/malfet
facebook-github-bot pushed a commit that referenced this pull request Jun 8, 2022
Summary:
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

A few bigger updates:
1. Initial support of cp.async and cp.async.wait: csarofeen#1619
2. Emulate Ampere's mma 16816 with Turing's mma 1688, for a unified interface: csarofeen#1643
3. Extending the infrastructure to support mma operators on Turing and Ampere architectures: csarofeen#1440

Commits that are actually in this PR from the csarofeen branch:
```
* dd23252 (csarofeen/devel) Fusion Segmenter: Unify single kernel and multi-kernel runtime path (#1710)
* b3d1c3f Fix missing cooperative launch (#1726)
* dc670a2 Async gmem copy support on sm80+ (#1619)
* 5e6a8da Add turing mma support and test (#1643)
* d6d6b7d Fix rFactor when there are indirect root domain(s), and refactor (#1723)
* 7093e39 Mma op integration on ampere (#1440)
* fade8da patch python test for bfloat16 (#1724)
* 8fbd0b1 Fine-grained kernel profiling (#1720)
* 77c1b4f Adding dry run mode to skip arch dependent checks (#1702)
* 151d95b More precise concretization analysis (#1719)
* f4d3630 Enable complex python tests (#1667)
* 4ceeee5 Minor bugfix in transform_rfactor.cpp (#1715)
* 3675c70 Separate root domain and rfactor domain in TransformPrinter (#1716)
* f68b830 Fix scheduling with polymorphic broadcast (#1714)
* 4ab5ef7 updating_ci_machine (#1718)
* 56585c5 Merge pull request #1711 from csarofeen/upstream_master_bump_0517
* 174d453 Allow using nvFuser on CUDA extension (#1701)
* 18bee67 Validate LOOP concrete IDs have complete IterDomains (#1676)
```

Pull Request resolved: #78244

Reviewed By: ejguan

Differential Revision: D36678948

Pulled By: davidberard98

fbshipit-source-id: 0ccde965acbd31da67d99c6adb2eaaa888948105
