
Add TORCH_CHECK for groups <= channels in native_channel_shuffle #153781


Open
wants to merge 2 commits into main

Conversation

@aishwaryar12309 (Contributor) commented May 17, 2025

Fixes #153231

Changes

  • Added TORCH_CHECK(groups <= channels) to prevent silent misbehavior in native_channel_shuffle when the number of groups is larger than the input's channel dimension. Files changed: aten/src/ATen/native/ChannelShuffle.cpp

Motivation

  • Previously, the function accepted groups > channels, which produced incorrect results or crashed downstream (the linked issue reports a floating-point exception). The new check surfaces the misuse early with a clear error; see the sketch below.
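
For illustration, a minimal Python sketch of the failure mode and of the behavior this PR adds (the exact error message is an assumption; it depends on the wording of the TORCH_CHECK in the patch):

```python
import torch

x = torch.randn(2, 4, 8, 8)  # NCHW input with 4 channels

# Before this PR, groups > channels (e.g. 100 > 4) could crash the
# process with a floating-point exception, as reported in #153231:
#   torch.native_channel_shuffle(x, 100)

# With the new check, the call should raise a Python-level error instead:
try:
    torch.native_channel_shuffle(x, 100)
except RuntimeError as e:
    print(e)  # assumed to mention groups vs. channels
```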

Labels: module: nn, module: crash, module: edge cases
cc @albanD

pytorch-bot (bot) commented May 17, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153781

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 4d1aa81 with merge base 8568dbc:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@HDCharles requested a review from albanD May 20, 2025 17:47
@HDCharles added the topic: bug fixes and triaged labels May 20, 2025
@albanD (Collaborator) left a comment


Thanks!
Would you be able to add a small test in test_torch.py to make sure it raises as expected now?
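
A minimal sketch of such a regression test (the test name is hypothetical, and the expected-message regex is an assumption that should be matched to the actual TORCH_CHECK text):

```python
# Hypothetical test to add to TestTorch in test_torch.py.
def test_native_channel_shuffle_groups_gt_channels(self):
    x = torch.randn(1, 4, 2, 2)  # 4 channels
    # groups (8) exceeds channels (4), so the new check should fire.
    with self.assertRaisesRegex(RuntimeError, "groups"):
        torch.native_channel_shuffle(x, 8)
```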

@aishwaryar12309 (Contributor, Author)

I can add a test!

@malfet (Contributor) commented Jun 6, 2025

> Thanks! Would you be able to add a small test in test_torch.py to make sure it raises as expected now?

Or to OpInfo as error_inputs ;)
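
As a sketch, such an error_inputs entry could look roughly like this, assuming the OpInfo helpers in torch/testing/_internal/opinfo/core; the function name and error regex are illustrative, not from this PR:

```python
import torch
from torch.testing._internal.opinfo.core import ErrorInput, SampleInput

def error_inputs_channel_shuffle(op_info, device, **kwargs):
    x = torch.randn(1, 4, 2, 2, device=device)
    # groups larger than the channel count should now raise.
    yield ErrorInput(
        SampleInput(x, args=(8,)),
        error_type=RuntimeError,
        error_regex="groups",  # assumed fragment of the new error message
    )
```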

@malfet added the release notes: nn label Jun 6, 2025
@huydhn (Contributor) commented Jun 6, 2025

From the office hours: you can set USE_CUDA=0 to build without CUDA; the local build is much faster that way.
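
Concretely, the non-CUDA development build is typically invoked like this (see CONTRIBUTING.md in the repo for the full setup):

```
USE_CUDA=0 python setup.py develop
```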

@abhishek-iitmadras (Collaborator)

This guide, https://github.com/pytorch/pytorch/wiki/lintrunner, helps with fixing lint failures.
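
Typical usage, per that wiki page:

```
pip install lintrunner
lintrunner init   # install the linters configured in .lintrunner.toml
lintrunner -a     # lint changed files and auto-apply suggested fixes
```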

github-actions (bot) commented Aug 5, 2025

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions bot added the Stale label Aug 5, 2025
Labels
open source, release notes: nn, Stale, topic: bug fixes, triaged
Development

Successfully merging this pull request may close these issues.

torch.native_channel_shuffle crashes with Floating Point Exception when given large integer parameter
7 participants