# Add unified memory APIs for torch.accelerator #152932
## Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152932. Note: links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 3 Unrelated Failures as of commit 63f2a36 with merge base 178515d:

- NEW FAILURE - The following job has failed: …
- FLAKY - The following jobs failed but were likely due to flakiness present on trunk: …

This comment was automatically generated by Dr. CI and updates every 15 minutes.
This reverts commit 2ad5c25. Reverted #152932 on behalf of https://github.com/ZainRizvi: "Very sorry, but this is still breaking internally. @albanD, would you be able to help get this past the finish line? D78496124 has more details on the failure, and the workaround might be to do something like what's in D78684669. To validate the fixes internally, you can follow the instructions here to ghimport the changes: https://fburl.com/fixing-ghfirst-reverts"
Starting merge as part of PR stack under #155200

@pytorchbot merge -i

Merge started. Your change will be merged while ignoring the following 7 checks:
- Check Labels / Check labels
- Check mergeability of ghstack PR / ghstack-mergeability-check
- pull / linux-jammy-py3_9-clang9-xla / test (xla, 1, 1, linux.12xlarge, unstable)
- xpu / linux-jammy-xpu-2025.1-py3.9 / test (default, 2, 6, linux.idc.xpu)
- xpu / linux-jammy-xpu-2025.1-py3.9 / test (default, 5, 6, linux.idc.xpu)
- rocm / linux-jammy-rocm-py3.10 / test (default, 2, 6, linux.rocm.gpu.2)
- rocm / linux-jammy-rocm-py3.10 / test (default, 1, 6, linux.rocm.gpu.2)

Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Starting merge as part of PR stack under #155200

Pull Request resolved: #155200
Approved by: https://github.com/albanD
ghstack dependencies: #138222, #152932
This reverts commit 15f1173. Reverted #152932 on behalf of https://github.com/jithunnair-amd: broke ROCm periodic runs on MI300, e.g. https://github.com/pytorch/pytorch/actions/runs/16764977800/job/47470050573
Starting merge as part of PR stack under #155200

Pull Request resolved: #155200
Approved by: https://github.com/albanD
ghstack dependencies: #138222, #152932
# Motivation

The following APIs will be put under torch.accelerator:

- empty_cache
- max_memory_allocated
- max_memory_reserved
- memory_allocated
- memory_reserved
- memory_stats
- reset_accumulated_memory_stats
- reset_peak_memory_stats

Pull Request resolved: pytorch#152932
Approved by: https://github.com/albanD
ghstack dependencies: pytorch#138222
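A minimal usage sketch of the APIs listed above, assuming this PR has landed and an accelerator backend (CUDA, XPU, or ROCm) is present. The memory calls are assumed to mirror the existing torch.cuda memory APIs with default (current-device) arguments; treat it as illustrative, not authoritative:

```python
import torch

# torch.accelerator.is_available() and current_accelerator() are part of
# the pre-existing torch.accelerator API; the memory calls below are the
# ones this PR proposes to add.
if torch.accelerator.is_available():
    device = torch.accelerator.current_accelerator()
    x = torch.randn(1024, 1024, device=device)

    # Current and peak usage on the current accelerator.
    print("allocated:     ", torch.accelerator.memory_allocated())
    print("reserved:      ", torch.accelerator.memory_reserved())
    print("peak allocated:", torch.accelerator.max_memory_allocated())
    print("peak reserved: ", torch.accelerator.max_memory_reserved())

    # Detailed allocator statistics as a dict of counters.
    stats = torch.accelerator.memory_stats()

    # Reset the peak/accumulated counters, then release cached blocks
    # back to the device allocator.
    torch.accelerator.reset_peak_memory_stats()
    torch.accelerator.reset_accumulated_memory_stats()
    del x
    torch.accelerator.empty_cache()
```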
Pull Request resolved: pytorch#155200
Approved by: https://github.com/albanD
ghstack dependencies: pytorch#138222, pytorch#152932
Stack from ghstack (oldest at bottom):

# Motivation

The following APIs will be put under torch.accelerator; see the full list in the merge message above, and the device-agnostic sketch below.

cc @albanD @EikanWang
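For context, a sketch of the before/after that this unification enables; the torch.cuda and torch.xpu calls are the existing backend-specific equivalents, and the torch.accelerator lines assume this PR's API:

```python
import torch

# Before: each backend exposes its own memory namespace, so portable
# code has to branch on the backend.
if torch.cuda.is_available():
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.empty_cache()
elif torch.xpu.is_available():
    torch.xpu.reset_peak_memory_stats()
    torch.xpu.empty_cache()

# After (assuming this PR): one device-agnostic call site.
torch.accelerator.reset_peak_memory_stats()
torch.accelerator.empty_cache()
```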