Update upstream opinfo to generate appropriately scaled sample inputs #158018
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/158018

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures — As of commit a2174ed with merge base 178515d, the following jobs have failed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased
Force-pushed from 5c2a209 to 5192752 (…ppropriately-scaled-sample-inputs)
This seems unrelated? I've made no changes to inductor.

@drisspg Could I get a re-review?
Currently, opinfo generates random inputs for `_scaled_mm` but does not enforce type saturation, unlike the upstream test implementation, which explicitly saturates both fp8 data types.

Problem:
The current random input generation in `sample_inputs_scaled_mm` may produce values outside the valid range for the input types, potentially missing edge cases that the CUDA tests intentionally cover.
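To make the "outside the valid range" concern concrete, here is a minimal dependency-free sketch of what saturation means for the two fp8 formats. The constants are assumptions based on the OCP FP8 formats (float8_e4m3fn has a largest finite value of 448.0; float8_e5m2 tops out at 57344.0); this is an illustration, not code from the PR.

```python
# Assumed finite ranges of the two fp8 formats (OCP FP8 spec):
# e4m3fn has no inf encoding and a max finite value of 448.0,
# while e5m2 reaches 57344.0. Random inputs drawn from a wide
# distribution can exceed these ranges and must be saturated.
FP8_E4M3_MAX = 448.0    # largest finite float8_e4m3fn value (assumption)
FP8_E5M2_MAX = 57344.0  # largest finite float8_e5m2 value (assumption)

def saturate(x: float, max_val: float) -> float:
    """Clamp a value into the finite range of an fp8 format."""
    return max(-max_val, min(x, max_val))
```

An input like 1000.0 would overflow e4m3 without this clamp; after saturation it becomes 448.0, matching what the upstream tests intentionally exercise.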
Solution:
Modify the `sample_inputs_scaled_mm` implementation to saturate inputs the same way the upstream `_scaled_mm` tests do (pytorch/test/test_matmul_cuda.py, lines 1007 to 1052 in 52e4e41).
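The approach can be sketched as follows. This is a hypothetical, dependency-free stand-in for the actual generator (the real code builds tensors in `sample_inputs_scaled_mm`); `sample_saturated_matrix` and `FP8_E4M3_MAX` are illustrative names, not identifiers from the PR.

```python
import random

FP8_E4M3_MAX = 448.0  # assumed largest finite float8_e4m3fn value

def sample_saturated_matrix(rows, cols, scale=1000.0,
                            max_val=FP8_E4M3_MAX, seed=0):
    """Generate a random matrix, then saturate every entry so it stays
    representable in the target fp8 format -- mirroring the clamping the
    upstream _scaled_mm tests apply before casting to fp8."""
    rng = random.Random(seed)
    return [
        [max(-max_val, min(rng.uniform(-scale, scale), max_val))
         for _ in range(cols)]
        for _ in range(rows)
    ]
```

Drawing from a range wider than the fp8 format and then clamping guarantees the saturation boundary itself appears in the samples, rather than only values that happen to fit.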
This required a bit of alteration to adapt to the late realization of the inputs. I think I have done this correctly, but am open to suggestions.