
Conversation

@fduwjj (Contributor) commented Aug 16, 2024

Stack from ghstack (oldest at bottom):

@zdevito added a cache for CudaEvent in #122732. This PR productionizes it behind a flag.

cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @wz337 @wconstab @d4l3k @c-p-i-o @xmfan
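
For orientation, here is a minimal self-contained sketch of what "a flag" for this feature could look like. The environment-variable name and the struct and function names below are assumptions for illustration only, not necessarily the mechanism this PR uses.

#include <cstdlib>
#include <cstring>

// Hedged sketch: read a boolean switch once at process-group construction time
// (variable and struct names are assumed, not taken from the PR) so that every
// work object created afterwards knows whether to use the CUDA event cache.
struct EventCacheConfig {
  bool enabled = false;

  static EventCacheConfig fromEnv() {
    EventCacheConfig cfg;
    // Treat any value other than "0" as "on"; unset means "off" in this sketch.
    const char* v = std::getenv("TORCH_NCCL_CUDA_EVENT_CACHE");  // assumed name
    cfg.enabled = (v != nullptr && std::strcmp(v, "0") != 0);
    return cfg;
  }
};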

pytorch-bot bot commented Aug 16, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/133727

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 47c8e05 with merge base e1b9b89:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the oncall: distributed and release notes: distributed (c10d) labels Aug 16, 2024
fduwjj added a commit that referenced this pull request Aug 16, 2024
ghstack-source-id: f1edc84
Pull Request resolved: #133727
@fduwjj fduwjj marked this pull request as draft August 16, 2024 23:29
@fduwjj fduwjj marked this pull request as ready for review August 19, 2024 19:41
@wconstab (Contributor)

cc @eqy: any potential issues with long-term reuse of CUDA events? One motivation for this PR is to avoid the case where ~CudaEvent causes a hang.
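
As background on the reuse question, here is a minimal CUDA-runtime sketch, independent of this PR, showing that a single event can be recorded and waited on repeatedly; pooling events therefore removes per-iteration cudaEventCreate/cudaEventDestroy calls, including the destroy calls this PR is trying to avoid.

#include <cuda_runtime.h>
#include <cstdio>

int main() {
  cudaEvent_t ev = nullptr;
  cudaEventCreate(&ev);                  // created once
  for (int i = 0; i < 100; ++i) {
    // ... enqueue GPU work on the default stream here ...
    cudaEventRecord(ev, /*stream=*/0);   // re-record the same event every iteration
    cudaEventSynchronize(ev);            // wait for the most recent record
  }
  cudaEventDestroy(ev);                  // destroyed once, at a quiescent point
  std::printf("reused one CUDA event across 100 iterations\n");
  return 0;
}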

@fduwjj fduwjj added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 19, 2024
fduwjj added a commit that referenced this pull request Aug 19, 2024
ghstack-source-id: 4e963e4
Pull Request resolved: #133727
@fduwjj (Contributor, Author) commented Aug 19, 2024

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@kwen2501 (Collaborator) left a comment

LGTM. Added two minor comments.

const std::optional<std::vector<at::Tensor>>& inputs,
bool desyncDebug,
bool enableTiming,
bool cudaEventCacheEnabled,

nit: maybe we can make WorkNCCL a friend of ProcessGroupNCCL so that it can access ProcessGroupNCCL::cudaEventCacheEnabled_ directly, and we don't have to pass every flag to the constructor of WorkNCCL?
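
For illustration, here is a self-contained sketch of the friend-class alternative suggested above. The structure is simplified (WorkNCCL is shown as a standalone class holding a reference to its process group); the names mirror the quoted code, but nothing here is the PR's actual change.

class WorkNCCL;  // forward declaration

class ProcessGroupNCCL {
  // Granting friendship lets WorkNCCL read private flags directly instead of
  // receiving each one through its constructor.
  friend class WorkNCCL;

 private:
  bool cudaEventCacheEnabled_ = false;
};

class WorkNCCL {
 public:
  explicit WorkNCCL(const ProcessGroupNCCL& pg) : pg_(pg) {}

  bool useEventCache() const {
    // Allowed because WorkNCCL is a friend of ProcessGroupNCCL.
    return pg_.cudaEventCacheEnabled_;
  }

 private:
  const ProcessGroupNCCL& pg_;  // back-reference to the owning process group
};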

Comment on lines +766 to +792
std::shared_ptr<at::cuda::CUDAEvent> ProcessGroupNCCL::CUDAEventCache::create(
    bool timing) {
  // The custom deleter returns the event to the cache instead of destroying it,
  // so cudaEventDestroy is never called on the hot path.
  auto deleter = [this, timing](at::cuda::CUDAEvent* event) {
    std::lock_guard<std::mutex> lock(this->cacheMutex_);
    this->eventsArray_[timing ? 1 : 0].push_back(event);
  };
  at::cuda::CUDAEvent* event = nullptr;
  {
    // Try to reuse a cached event of the requested kind (timing vs. non-timing).
    std::lock_guard<std::mutex> lock(cacheMutex_);
    auto& events = eventsArray_[timing ? 1 : 0];
    if (!events.empty()) {
      event = events.back();
      events.pop_back();
    }
  }
  // Cache miss: allocate a new event with the appropriate flags.
  if (!event) {
    event = new at::cuda::CUDAEvent(
        timing ? cudaEventDefault : cudaEventDisableTiming);
  }
  return std::shared_ptr<at::cuda::CUDAEvent>(event, std::move(deleter));
}

// Singleton accessor: a single process-wide cache shared by all work objects.
ProcessGroupNCCL::CUDAEventCache& ProcessGroupNCCL::CUDAEventCache::get() {
  static ProcessGroupNCCL::CUDAEventCache cache;
  return cache;
}


nit: can you add some comments for this block of code? Thanks!
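
For readers skimming the thread, here is a hedged sketch of how a caller might obtain an event from this cache. The helper name, include path, and the visibility of the nested cache class are assumptions; the call site is illustrative, not the PR's exact code.

#include <torch/csrc/distributed/c10d/ProcessGroupNCCL.hpp>
#include <memory>

// Hypothetical caller (not quoted from the PR): ask the process-wide cache for
// an event. When the returned shared_ptr releases its last reference, the
// custom deleter above pushes the event back into the cache instead of
// destroying it.
std::shared_ptr<at::cuda::CUDAEvent> acquireEndEvent(bool enableTiming) {
  auto event = c10d::ProcessGroupNCCL::CUDAEventCache::get().create(enableTiming);
  event->record();  // record on the current CUDA stream, as a work object would
  return event;
}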

@github-actions github-actions bot deleted the gh/fduwjj/116/head branch September 28, 2024 02:08
fduwjj added a commit that referenced this pull request Nov 19, 2024
We added `CudaEventCache` in #133727. It reuses CUDA events so that we no longer have to destroy them, which has caused hangs in the past. We have already run a number of tests, plus testing on TorchTitan and internal workloads, and no errors or crashes have been found so far, so we have decided to roll the feature out to all OSS users. Internal workloads are not affected by this PR because of internal gating.

cc H-Huang awgu kwen2501 wanchaol fegin wz337 wconstab d4l3k c-p-i-o

[ghstack-poisoned]
fduwjj added a commit that referenced this pull request Nov 21, 2024
…ce support"


We added `CudaEventCache` in #133727. It reuses CUDA events so that we no longer have to destroy them, which has caused hangs in the past. We have already run a number of tests, plus testing on TorchTitan and internal workloads, and no errors or crashes have been found so far, so we have decided to roll the feature out to all OSS users. Internal workloads are not affected by this PR because of internal gating.

We also observed some multi-device use cases in OSS, so we want to bring back the multi-device support originally proposed in https://github.com/pytorch/pytorch/pull/122732/files.

cc H-Huang awgu kwen2501 wanchaol fegin wz337 wconstab d4l3k c-p-i-o

[ghstack-poisoned]
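
Purely as an illustration of what "multi-device support" could mean for a cache like this, here is a sketch that keeps one independent pool per device index. This is an assumption about the shape of the change, not the follow-up PR's actual design.

#include <ATen/cuda/CUDAEvent.h>
#include <array>
#include <mutex>
#include <vector>

// Illustrative only: one pool per device so events recorded on different GPUs
// are never mixed. kMaxDevices and the struct layout are assumptions.
constexpr int kMaxDevices = 16;

struct PerDeviceEventPool {
  std::mutex mutex;
  std::vector<at::cuda::CUDAEvent*> events;
};

inline PerDeviceEventPool& poolForDevice(int deviceIndex) {
  static std::array<PerDeviceEventPool, kMaxDevices> pools;
  return pools.at(deviceIndex);  // bounds-checked access
}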
pytorchmergebot pushed a commit that referenced this pull request Nov 26, 2024
…140975)

Pull Request resolved: #140975
Approved by: https://github.com/eqy, https://github.com/kwen2501

Labels

ciflow/trunk (Trigger trunk jobs on your pull request), Merged, oncall: distributed (Add this issue/PR to distributed oncall triage queue), release notes: distributed (c10d)

7 participants