
Conversation

libohao1201 (Contributor) commented Jul 31, 2025

For #114850, we will port distributed tests to Intel GPU.

We enable Intel GPU with the following methods and try our best to keep the original code style (a sketch of the resulting device-selection pattern follows this list):

  1. use "torch.accelerator.current_accelerator()" to determine the accelerator backend
  2. enable XPU for some test paths
  3. skip test cases that Intel GPU does not support
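
For context, a minimal sketch of the device-selection pattern described above; the variable names are illustrative and not taken from the PR:

```python
import torch

# Determine the accelerator backend generically:
# "cuda" on NVIDIA GPUs, "xpu" on Intel GPUs, "cpu" when no accelerator is present.
acc = torch.accelerator.current_accelerator()
device_type = acc.type if acc is not None else "cpu"
print(device_type)
```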

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta

pytorch-bot bot commented Jul 31, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159543

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 4995377 with merge base fc80f68:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

linux-foundation-easycla bot commented Jul 31, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@pytorch-bot pytorch-bot bot added oncall: distributed Add this issue/PR to distributed oncall triage queue topic: not user facing topic category labels Jul 31, 2025

@skipIfTorchDynamo("https://github.com/pytorch/pytorch/issues/115653")
-@unittest.skipIf(not TEST_CUDA, "CUDA not available")
+@unittest.skipIf(not torch.accelerator.is_available(), "Accelerator not available")
A Collaborator commented:

@unittest.skipIf(not TEST_CUDA and not TEST_XPU, "Neither CUDA nor XPU is available")
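
For orientation, TEST_CUDA and TEST_XPU in this suggestion are the availability flags from PyTorch's test utilities (torch.testing._internal.common_utils). A hedged sketch contrasting the two skip styles under discussion, not code taken from the PR:

```python
import unittest

import torch
from torch.testing._internal.common_utils import TEST_CUDA, TEST_XPU

# Style suggested here: enumerate the supported GPU backends explicitly.
skip_unless_gpu_explicit = unittest.skipIf(
    not TEST_CUDA and not TEST_XPU, "Neither CUDA nor XPU is available"
)

# Style used in the updated decorator above: a generic accelerator check.
skip_unless_gpu_generic = unittest.skipIf(
    not torch.accelerator.is_available(), "Accelerator not available"
)
```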

):
debug = False
-dev = torch.device(torch.cuda.current_device())
+dev = torch.device(torch.accelerator.current_device_index())
A Collaborator commented:

torch.accelerator does not apply to CPU.

The Contributor (author) replied:

But I think the CPU case is already skipped by @skip_if_lt_x_gpu(2).
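
For reference, a rough sketch of how such a test is typically gated; the decorator comes from PyTorch's internal distributed test utilities, and the test body here is purely illustrative:

```python
import torch
from torch.testing._internal.common_distributed import skip_if_lt_x_gpu
from torch.testing._internal.common_utils import TestCase, run_tests


class ExampleTest(TestCase):
    @skip_if_lt_x_gpu(2)
    def test_current_device_index(self):
        # Reached only when at least two GPUs (CUDA or XPU) are visible,
        # so the accelerator device index below always maps to a real device.
        dev = torch.device(torch.accelerator.current_device_index())
        self.assertIsNotNone(dev)


if __name__ == "__main__":
    run_tests()
```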

guangyey (Collaborator) left a comment

I introduced a few memory-related APIs under torch.accelerator in #152932.
We could use the torch.accelerator APIs instead of get_device_module once #152932 lands.
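
As a rough illustration of the pattern this comment refers to (device-agnostic memory queries go through get_device_module today; the exact call sites in the tests may differ):

```python
import torch

acc = torch.accelerator.current_accelerator()
device_mod = torch.get_device_module(acc)  # e.g. torch.cuda or torch.xpu

# Today: query memory stats through the backend module. Once the memory APIs
# from #152932 land, this could become a direct torch.accelerator call instead.
peak_bytes = device_mod.max_memory_allocated()
```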

@guangyey guangyey changed the title [WIP][1/N]Port 3 distributed/_tools test cases to Intel GPU [1/N]Port 3 distributed/_tools test cases to Intel GPU Aug 5, 2025
@guangyey guangyey requested a review from d4l3k August 5, 2025 08:16
@guangyey guangyey added the ciflow/xpu Run XPU CI tasks label Aug 5, 2025
@guangyey guangyey moved this to Review Required in PyTorch Intel Aug 5, 2025
d4l3k (Member) left a comment

LGTM

guangyey (Collaborator):

@pytorchbot rebase

pytorchmergebot (Collaborator):

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

pytorchmergebot (Collaborator):

Successfully rebased libo/distributed_ut_p1 onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout libo/distributed_ut_p1 && git pull --rebase)

@pytorch-bot pytorch-bot bot removed the ciflow/xpu Run XPU CI tasks label Aug 12, 2025
@guangyey guangyey added the ciflow/xpu Run XPU CI tasks label Aug 12, 2025
@guangyey guangyey moved this from Review Required to Approved in PyTorch Intel Aug 12, 2025
guangyey (Collaborator):

@libohao1201 please help fix the lint error.

@pytorch-bot pytorch-bot bot removed the ciflow/xpu Run XPU CI tasks label Aug 13, 2025
@daisyden daisyden added the ciflow/xpu Run XPU CI tasks label Aug 13, 2025
pytorch-bot bot commented Aug 13, 2025

To add the ciflow label ciflow/xpu please first approve the workflows that are awaiting approval (scroll to the bottom of this page).

This helps ensure we don't trigger CI on this PR until it is actually authorized to do so. Please ping one of the reviewers if you do not have access to approve and run workflows.

@pytorch-bot pytorch-bot bot removed the ciflow/xpu Run XPU CI tasks label Aug 13, 2025
@guangyey guangyey added the ciflow/xpu Run XPU CI tasks label Aug 13, 2025
@pytorch-bot pytorch-bot bot removed the ciflow/xpu Run XPU CI tasks label Aug 13, 2025
@guangyey guangyey added the ciflow/xpu Run XPU CI tasks label Aug 13, 2025
guangyey (Collaborator):

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 13, 2025
pytorchmergebot (Collaborator):

Merge failed

Reason: 1 mandatory check(s) failed. Dig deeper by viewing the failures on hud.

Failing merge rule: Core Maintainers

guangyey (Collaborator):

@libohao1201 you need to sign EasyCLA before landing this PR.

libohao1201 (Contributor, Author):

> @libohao1201 you need to sign EasyCLA before landing this PR.

Done.

guangyey (Collaborator):

@pytorchbot merge

pytorchmergebot (Collaborator):

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@github-project-automation github-project-automation bot moved this from Approved to Done in PyTorch Intel Aug 13, 2025
chuanhaozhuge pushed a commit that referenced this pull request Aug 14, 2025
chuanhaozhuge pushed a commit that referenced this pull request Aug 18, 2025
can-gaa-hou pushed a commit to can-gaa-hou/pytorch that referenced this pull request Aug 22, 2025
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025