Enhance ROC Curve Display Tests for Improved Clarity and Maintainability #31266
Conversation
Replaced the `data_binary` fixture that filtered classes from a multiclass dataset with a new fixture generating a synthetic binary classification dataset using `make_classification`. This ensures consistent data characteristics, introduces label noise, and better simulates real-world classification challenges.
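A minimal sketch of what such a fixture could look like, assuming the `flip_y=0.1`, `class_sep=0.8`, and 20 features discussed in this conversation; the remaining parameter values are illustrative assumptions rather than the exact PR code:

```python
import pytest
from sklearn.datasets import make_classification


@pytest.fixture(scope="module")
def data_binary():
    """Sketch of a synthetic binary classification fixture (illustrative, not the exact PR code)."""
    X, y = make_classification(
        n_samples=200,     # assumed sample count
        n_features=20,     # 20 features to avoid a perfect ROC AUC of 1.0
        n_informative=5,   # assumed number of informative features
        flip_y=0.1,        # label noise, as described in the PR
        class_sep=0.8,     # moderate class separation, as described in the PR
        random_state=0,    # assumed seed for reproducibility
    )
    return X, y
```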
@lucyleeow I guess everything is good now.
LGTM! My only nit is that I am not sure we need 20 features but I'll let the 2nd reviewer decide that.
@lucyleeow, I added 20 features to prevent overfitting. Without them we'd likely get a perfect ROC AUC of 1.0. Despite the added features, training time remains fast, so performance isn't a concern.
LGTM
@NEREUScode thanks for explaining. What AUC do you get with 20 features? And what AUC do you get with, e.g., 10?
@lucyleeow's question stands, but looks good anyway.
@lucyleeow I'll run more tests to see if the feature number needs adjusting.
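For reference, a quick comparison along these lines (a sketch only; the estimator choice and all parameter values besides `flip_y=0.1` and `class_sep=0.8` are assumptions) could be used to check how the AUC varies with the number of features:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Compare held-out AUC for different feature counts (values are illustrative).
for n_features in (10, 20):
    X, y = make_classification(
        n_samples=200,
        n_features=n_features,
        flip_y=0.1,
        class_sep=0.8,
        random_state=0,
    )
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.decision_function(X_test))
    print(f"n_features={n_features}: AUC={auc:.3f}")
```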
PR Description:
Summary of Changes:
This PR refactors the `data_binary` fixture in the `test_roc_curve_display.py` file. The previous fixture filtered a multiclass dataset (Iris) to create a binary classification task. However, this approach resulted in AUC values consistently reaching 1.0, which does not reflect real-world challenges.

The new fixture utilizes `make_classification` from `sklearn.datasets` to generate a synthetic binary classification dataset with the following characteristics:
- Label noise (`flip_y=0.1`) to simulate real-world imperfections in the data.
- Moderate class separation (`class_sep=0.8`) set to avoid perfect separation.

These changes provide a more complex and representative dataset for testing the `roc_curve_display` function and other related metrics, thereby improving the robustness of the tests.

Reference Issues/PRs:
- `test_roc_curve_display.py` #31243
- `from_cv_results` in `RocCurveDisplay` (single `RocCurveDisplay`) #30399 (comment)

For Reviewers: