
[1/N]Port 3 distributed/_tools test cases to Intel GPU #159543


Open
wants to merge 2 commits into main

Conversation

libohao1201

@libohao1201 libohao1201 commented Jul 31, 2025

For #114850, we will port distributed tests to Intel GPU.

We enable Intel GPU with the following methods, keeping the original code style as much as possible:

  1. use "torch.accelerator.current_accelerator()" to determine the accelerator backend
  2. enable XPU for some test paths
  3. skip test cases that Intel GPU does not support

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta


pytorch-bot bot commented Jul 31, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159543

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 63cf20f with merge base fc80f68:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.


linux-foundation-easycla bot commented Jul 31, 2025

CLA Not Signed

@pytorch-bot pytorch-bot bot added the oncall: distributed and topic: not user facing labels Jul 31, 2025

@skipIfTorchDynamo("https://github.com/pytorch/pytorch/issues/115653")
-@unittest.skipIf(not TEST_CUDA, "CUDA not available")
+@unittest.skipIf(not torch.accelerator.is_available(), "Accelerator not available")
Collaborator

@unittest.skipIf(not TEST_CUDA and not TEST_XPU, "Neither CUDA nor XPU is available")

@@ -77,17 +77,18 @@ def _test_tracker_multi_group(
        mp_policy: MixedPrecisionPolicy,
    ):
        debug = False
-       dev = torch.device(torch.cuda.current_device())
+       dev = torch.device(torch.accelerator.current_device_index())
Collaborator

torch.accelerator does not apply to CPU.

Author

But I think the CPU case is already skipped by @skip_if_lt_x_gpu(2).

Collaborator

@guangyey guangyey left a comment

I introduced a few memory-related APIs under torch.accelerator in #152932.
We could use the torch.accelerator APIs instead of get_device_module once #152932 lands.

@guangyey guangyey changed the title [WIP][1/N]Port 3 distributed/_tools test cases to Intel GPU [1/N]Port 3 distributed/_tools test cases to Intel GPU Aug 5, 2025
@guangyey guangyey requested a review from d4l3k August 5, 2025 08:16
@guangyey guangyey added the ciflow/xpu Run XPU CI tasks label Aug 5, 2025
@guangyey guangyey moved this to Review Required in PyTorch Intel Aug 5, 2025
Member

@d4l3k d4l3k left a comment

LGTM

@guangyey
Collaborator

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased libo/distributed_ut_p1 onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout libo/distributed_ut_p1 && git pull --rebase)

@pytorch-bot pytorch-bot bot removed the ciflow/xpu Run XPU CI tasks label Aug 12, 2025
@guangyey guangyey added the ciflow/xpu Run XPU CI tasks label Aug 12, 2025
@guangyey guangyey moved this from Review Required to Approved in PyTorch Intel Aug 12, 2025
@guangyey
Collaborator

@libohao1201 please help fix the lint error.

Labels
ciflow/xpu · oncall: distributed · open source · topic: not user facing
Projects
Status: Approved
6 participants