Revert "[inductor] add lowering for repeat_interleave.Tensor with output size specified (#147160) (#158462)" #159798
Conversation
…put size specified (#147160) (#158462)"

This reverts commit 305a037.

Reason: causes device-side assertion failures when running with this repro (a minimized version of a failure seen in a real model)

```
import torch

def ri(inp, repeats, output_size):
    return torch.repeat_interleave(inp, repeats, output_size=output_size)

inp = torch.arange(0, 4, device="cuda").reshape(-1, 1)
x = torch.tensor([1, 2, 3, 4], device="cuda")
ri_c = torch.compile(ri)
print(ri(inp, x, 10))
print(ri_c(inp, x, 10))
```

[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159798
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 81 Pending
As of commit 8f77523 with merge base fb8f32e.
NEW FAILURES - The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@davidberard98 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
…or with output size specified (#147160) (#158462)""

This reverts commit 305a037.

Reason: causes device-side assertion failures when running with this repro (a minimized version of a failure seen in a real model)

```
import torch

def ri(inp, repeats, output_size):
    return torch.repeat_interleave(inp, repeats, output_size=output_size)

inp = torch.arange(0, 4, device="cuda").reshape(-1, 1)
x = torch.tensor([1, 2, 3, 4], device="cuda")
ri_c = torch.compile(ri)
print(ri(inp, x, 10))
print(ri_c(inp, x, 10))
```

which leads to errors like

```
/tmp/torchinductor_dberard/3h/c3hlb22fpptebupstsuhl6kexa6z3upgbnyxln7c24gfcr5747iu.py:30: unknown: block: [0,0,0], thread: [10,0,0] Assertion `index out of bounds: 0 <= tmp5 < 4` failed.
```

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben

Differential Revision: [D79591561](https://our.internmc.facebook.com/intern/diff/D79591561)

[ghstack-poisoned]
@davidberard98 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@pytorchbot merge -f "this is a revert; it should be relatively safe to land, and it is blocking some internal use cases that are affected by the bug"
Merge started
Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Merge failed
Reason: Command
Details for Dev Infra team
Raised by workflow job
This reverts commit 19f1f99.
Reverted #159456 on behalf of https://github.com/davidberard98 due to: Sorry, this causes a merge conflict with #159798, which I'm trying to land with co-dev to resolve a sev ([comment](#159456 (comment)))
Pull Request resolved: #159456
Approved by: https://github.com/Skylion007, https://github.com/malfet
@davidberard98 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@pytorchbot merge -f "this is a revert; it should be relatively safe to land, and it is blocking some internal use cases that are affected by the bug."
Merge started
Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Stack from ghstack (oldest at bottom):
This reverts commit 305a037.
Reason: causes device-side assertion failures when running with this repro (a minimized version of a failure seen in a real model)
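```
import torch

def ri(inp, repeats, output_size):
    return torch.repeat_interleave(inp, repeats, output_size=output_size)

inp = torch.arange(0, 4, device="cuda").reshape(-1, 1)
x = torch.tensor([1, 2, 3, 4], device="cuda")
ri_c = torch.compile(ri)
print(ri(inp, x, 10))
print(ri_c(inp, x, 10))
```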
which leads to errors like
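```
/tmp/torchinductor_dberard/3h/c3hlb22fpptebupstsuhl6kexa6z3upgbnyxln7c24gfcr5747iu.py:30: unknown: block: [0,0,0], thread: [10,0,0] Assertion `index out of bounds: 0 <= tmp5 < 4` failed.
```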
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben
Differential Revision: D79591561
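For reference, a minimal sketch of the eager semantics the reverted lowering was meant to reproduce, assuming the documented behavior of `torch.repeat_interleave` (with no `dim` argument the input is flattened, element `i` is repeated `repeats[i]` times, and `output_size` must equal `repeats.sum()`). It runs on CPU purely to illustrate the expected result of the repro above:

```
import torch

# CPU illustration of the repro above: inp flattens to [0, 1, 2, 3] and
# element i is repeated x[i] times, so output_size must be x.sum() == 10.
inp = torch.arange(0, 4).reshape(-1, 1)
x = torch.tensor([1, 2, 3, 4])

out = torch.repeat_interleave(inp, x, output_size=10)
print(out)  # tensor([0, 1, 1, 2, 2, 2, 3, 3, 3, 3])

# Hand-rolled equivalent of what the compiled kernel must compute.
expected = torch.cat(
    [torch.full((n,), v) for v, n in zip(inp.flatten().tolist(), x.tolist())]
)
assert torch.equal(out, expected)
```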