Remove median_absolute_error from METRICS_WITHOUT_SAMPLE_WEIGHT #30787
base: main
Conversation
@@ -1550,12 +1549,13 @@ def check_sample_weight_invariance(name, metric, y1, y2):
     ),
 )
 def test_regression_sample_weight_invariance(name):
-    n_samples = 50
     random_state = check_random_state(0)
+    n_samples = 51
This cannot be an even number due to #17370, inspired by @ogrisel's last paragraph in #17370 (comment), even though I agree it's a temporary fix.
Since #29907 was merged, it should be possible to update median_absolute_error to handle an even number of data points (with or without weights) as explained below.
sklearn/metrics/tests/test_common.py
-    n_samples = 50
-    random_state = check_random_state(0)
+    n_samples = 51
+    random_state = check_random_state(1)
I don't quite understand why this test would be 'flaky' for median_absolute_error for the 'check that the weighted and unweighted scores are unequal' check in check_sample_weight_invariance. Both seeds 0 and 2 fail here 🤷
To check that it was not our _weighted_percentile doing something wrong, I used numpy's quantile with sample weights (which I trust, because I saw that PR and the scrutiny and all the tests):
import numpy as np
from sklearn.metrics import median_absolute_error
from sklearn.utils.validation import check_random_state
n_samples = 51
random_state = check_random_state(0)
# regression
y_true = random_state.random_sample(size=(n_samples,))
y_pred = random_state.random_sample(size=(n_samples,))
rng = np.random.RandomState(0)
sample_weight = rng.randint(1, 10, size=len(y_true))
no_weights = np.median(np.abs(y_pred - y_true), axis=0)
# gives np.float64(0.26234528145390845)
weights = np.quantile(
    np.abs(y_pred - y_true), 0.5, axis=0, method='inverted_cdf',
    weights=sample_weight
)
# gives array(0.26234528)
I may be missing something, but I don't know why this would be likely to be the same with and without weights.
Note that changing the max value in sample_weight = rng.randint(1, 10, size=len(y_true)) from 10 to a smaller value (5 or 8, etc.) was also very effective.
On even numbers of samples (taking weights into account), inverted_cdf yields an asymmetric result, while np.median (on repeated samples) yields a symmetric result, as it uses the "linear" interpolation definition of "in between data points" quantiles.
We need to use a symmetric version of _weighted_percentile, named _averaged_weighted_percentile, which was merged yesterday as part of #29907 and is equivalent to method="averaged_inverted_cdf" of np.quantile.
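To make the asymmetry concrete, here is a small sketch on toy data using plain numpy (the quantile methods named here are standard np.quantile options; this is only an illustration, not the scikit-learn helper itself):

import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])  # even number of points

# inverted_cdf picks the lower of the two middle values -> asymmetric:
print(np.quantile(data, 0.5, method="inverted_cdf"))           # 2.0
# negating the data does not negate the result (-3.0 rather than -2.0):
print(np.quantile(-data, 0.5, method="inverted_cdf"))          # -3.0

# averaged_inverted_cdf averages the two middle values -> symmetric,
# and matches np.median on unweighted data:
print(np.quantile(data, 0.5, method="averaged_inverted_cdf"))  # 2.5
print(np.median(data))                                         # 2.5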
Sorry, I was not clear: a random state seed of 0 or 2 fails, but 1 passes.
The specific check that is failing is:
scikit-learn/sklearn/metrics/tests/test_common.py
Lines 1466 to 1475 in 2c2e970
    # check that the weighted and unweighted scores are unequal
    weighted_score = metric(y1, y2, sample_weight=sample_weight)
    # use context manager to supply custom error message
    with pytest.raises(AssertionError):
        assert_allclose(unweighted_score, weighted_score)
        raise ValueError(
            "Unweighted and weighted scores are unexpectedly "
            "almost equal (%s) and (%s) "
            "for %s" % (unweighted_score, weighted_score, name)
Which is odd, right? See the sanity check above.
Also, I agree that it would be better to use the numpy implementation of weighted quantiles, but our minimum supported version is not high enough. Do you think we should vendor the code?
Okay, I looked into this more, and it seems that this is just not a good check for median_absolute_error (in the plots below, the red line is the median value; the weighted 'median' is calculated with np.quantile(method="inverted_cdf")):
Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.utils.validation import check_random_state

fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(12, 10))

# Different weight ranges for each row
weight_ranges = [
    (None, "No Weights"),
    (np.arange(1, 10), "Weights (1-9)"),
    (np.arange(1, 5), "Weights (1-5)"),
]
# Different random seeds for each column
seeds = [0, 1, 2]

# Loop over rows (weights) and columns (seeds)
for row, (weight_range, weight_label) in enumerate(weight_ranges):
    for col, seed in enumerate(seeds):
        random_state = check_random_state(seed)
        y_true = random_state.random_sample(size=(51,))
        y_pred = random_state.random_sample(size=(51,))
        values = np.abs(y_pred - y_true)
        rng = np.random.RandomState(seed)
        if weight_range is not None:
            sample_weight = rng.randint(
                weight_range.min(), weight_range.max() + 1, size=len(y_true)
            )
        else:
            sample_weight = None
        # Plot histogram
        if sample_weight is None:
            axes[row, col].hist(
                values, bins=15, edgecolor="black", alpha=0.7, density=True
            )
            median_value = np.median(values)
        else:
            axes[row, col].hist(
                values, bins=15, weights=sample_weight, edgecolor="black",
                alpha=0.7, density=True
            )
            median_value = np.quantile(
                values, 0.5, method="inverted_cdf", weights=sample_weight
            )
        # Add median line
        axes[row, col].axvline(
            median_value, color="red", linestyle="dashed", linewidth=1, label="Median"
        )
        # Set title
        axes[row, col].set_title(f"Seed {seed} - {weight_label}")

# Adjust layout for readability
plt.tight_layout()
plt.show()
Weights don't change the distribution of abs(y_pred - y_true) enough to change the median value.
We can change the test so that this check passes (e.g., we could try adding only high or low weights to a subset of the values, as sketched below), but I am not 100% sure about changing this check for all regression metrics. To be fair, this test is just ensuring that weighted and un-weighted scores are different, so it should probably be okay?
cc @ogrisel
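For what it's worth, a hedged sketch of the "weight only a subset of the values" idea (not the actual test change; weighted np.quantile needs numpy >= 2.0): near-uniform random weights barely move the weighted median, while concentrating large weights on the above-median errors clearly shifts it.

import numpy as np
from sklearn.utils.validation import check_random_state

random_state = check_random_state(0)
y_true = random_state.random_sample(size=(51,))
y_pred = random_state.random_sample(size=(51,))
values = np.abs(y_pred - y_true)

# near-uniform random weights, as in the current test
uniform_weights = np.random.RandomState(0).randint(1, 10, size=len(values))
# large weights only on the above-median errors
skewed_weights = np.where(values > np.median(values), 50, 1)

print(np.median(values))
print(np.quantile(values, 0.5, method="inverted_cdf", weights=uniform_weights))
print(np.quantile(values, 0.5, method="inverted_cdf", weights=skewed_weights))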
Also, I agree that it would be better to use the numpy implementation of weighted quantiles, but our minimum supported version is not high enough. Do you think we should vendor the code?
If the complexity of vendoring is not too high, we should definitely benefit from it.
OK, I see. I think I would define weights as np.arange(len(y_true)), and I think that would be good enough for this test. I indeed agree with your analysis @lucyleeow.
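A minimal sketch to eyeball that suggestion (assuming median_absolute_error is called with sample_weight, which is what this PR exercises):

import numpy as np
from sklearn.metrics import median_absolute_error
from sklearn.utils.validation import check_random_state

random_state = check_random_state(0)
y_true = random_state.random_sample(size=(51,))
y_pred = random_state.random_sample(size=(51,))
sample_weight = np.arange(len(y_true))

# with steadily increasing weights, the weighted median should no longer
# coincide with the unweighted one
print(median_absolute_error(y_true, y_pred))
print(median_absolute_error(y_true, y_pred, sample_weight=sample_weight))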
I'll check the subsequent changes in the test to see if np.arange
could be an issue.
I would like to use _averaged_weighted_percentile directly, to later be able to add an array API compatible version.
At the time of writing, numpy.quantile(..., method="averaged_inverted_cdf") does not support weights (yet) and is not part of the array API spec. So we need a version that supports weights, the symmetric behavior with even numbers of data points, and possibly array API support.
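For illustration, a minimal sketch of that symmetric ("averaged") behaviour built on top of numpy's weighted inverted_cdf quantile (needs numpy >= 2.0; this is only a sketch, not scikit-learn's _averaged_weighted_percentile):

import numpy as np


def averaged_weighted_median(values, weights):
    # Average the inverted-CDF median with the negated inverted-CDF median of
    # the negated values; this symmetrizes the result on even (weighted)
    # sample sizes, mimicking method="averaged_inverted_cdf".
    lower = np.quantile(values, 0.5, method="inverted_cdf", weights=weights)
    upper = -np.quantile(-values, 0.5, method="inverted_cdf", weights=weights)
    return (lower + upper) / 2


# unweighted sanity check: matches np.median on an even number of points
data = np.array([1.0, 2.0, 3.0, 4.0])
print(averaged_weighted_median(data, np.ones_like(data)))  # 2.5
print(np.median(data))                                     # 2.5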
This is a waste, but shape consistency checks are not very costly (compared to value-dependent checks, e.g. checking that there are no negative weights or no nan/inf weights), so maybe we can live with it. Unless removing those redundant checks can help simplify the code, in which case we should do it.
Thanks @ogrisel. With regard to duplicate checks (#30787 (comment)), thinking about it more, I think the problem is more:
WDYT?
It sounds like a good plan.
    random_state = check_random_state(0)
    # regression
    y_true = random_state.random_sample(size=(n_samples,))
    y_pred = random_state.random_sample(size=(n_samples,))
    sample_weight = np.arange(len(y_true))
Changed as suggested in #30787 (comment). This works great.
I've amended check_sample_weight_invariance to optionally take sample_weight, as I didn't want to change the sample_weight for all tests that use check_sample_weight_invariance (roughly as sketched below). WDYT @glemaitre?
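Roughly what I mean, as a hypothetical sketch (not the actual diff; the real helper does a lot more than this):

import numpy as np


def check_sample_weight_invariance(name, metric, y1, y2, sample_weight=None):
    # keep the old random integer weights as the default, but let a caller
    # pass specific weights (e.g. np.arange) for metrics such as
    # median_absolute_error
    if sample_weight is None:
        rng = np.random.RandomState(0)
        sample_weight = rng.randint(1, 10, size=len(y1))

    unweighted_score = metric(y1, y2)
    weighted_score = metric(y1, y2, sample_weight=sample_weight)
    assert not np.allclose(unweighted_score, weighted_score), (
        f"Unweighted and weighted scores are unexpectedly almost equal "
        f"({unweighted_score}) and ({weighted_score}) for {name}"
    )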
What does this implement/fix? Explain your changes.
sample_weights was added to median_absolute_error in 0.24 but it was never removed from METRICS_WITHOUT_SAMPLE_WEIGHT. Removing it has highlighted several issues:

Redundancy in checking

I had to add check_consistent_length to median_absolute_error to get the same error message as other metrics when y_pred, y_true or sample_weights are not of the same length - it looks like this check was added to most sample-weighted metrics in #9903 but not median_absolute_error.

However, it is worth noting that there is redundancy in our checking:
- _check_reg_targets* - checks y_pred, y_true are of consistent length, performs check_array, checks multioutput is acceptable, and various other regression-related checks.
- check_consistent_length - checks that y_pred, y_true and sample_weights are of the same length; used in most (all?) regression metrics.
- _check_sample_weight - checks sample_weights is the same length as y, performs check_array on sample_weights, and various other checks.

If all 3 checks are done in a metric, we are effectively checking that sample_weight is the correct length 3 times (a sketch of the overlap follows below).
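To make the overlap concrete, a hedged sketch of what a regression metric effectively runs today (helper names as listed above; _check_reg_targets is private and only mentioned in a comment, and exact signatures may differ between versions):

import numpy as np
from sklearn.utils.validation import _check_sample_weight, check_consistent_length

y_true = np.array([3.0, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 8.0])
sample_weight = np.array([1.0, 2.0, 1.0])

# 1) _check_reg_targets (private, sklearn.metrics._regression) already checks
#    that y_true and y_pred have consistent lengths (plus check_array etc.).
# 2) check_consistent_length re-checks the first dimension of all arrays:
check_consistent_length(y_true, y_pred, sample_weight)
# 3) _check_sample_weight validates sample_weight yet again (shape against y,
#    dtype, check_array):
sample_weight = _check_sample_weight(sample_weight, y_true)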
Quantile problems

median_absolute_error fails check_sample_weight_invariance in test_regression_sample_weight_invariance - I will put a detailed description in the review comments.

Classification data used to test regression metrics
test_multilabel_sample_weight_invariance fails with:

This makes sense because we are passing multilabel classification data (0/1's) to MULTIOUTPUT_METRICS, which are regression metrics with "multioutput-continuous" format support (e.g., "mean_squared_error", "r2_score" etc). I am not sure why we would not use regression data for these metrics? The tests do pass for all the other regression metrics, but as abs(y_pred - y_true) would be either 1 or 0 for every sample, it is very likely that weighted and unweighted median_absolute_error would be the same value (see the sketch below).

I think we should amend test_multilabel_sample_weight_invariance so multi-output regression data is passed to the MULTIOUTPUT_METRICS tests (and maybe even change the name of this variable to make it clear that these are regression metrics).
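A quick illustration of why 0/1 targets are a poor fit for this check (made-up data; weighted np.quantile needs numpy >= 2.0):

import numpy as np

rng = np.random.RandomState(42)
y_true = rng.randint(0, 2, size=31)
y_pred = rng.randint(0, 2, size=31)
errors = np.abs(y_pred - y_true)  # every entry is 0 or 1

sample_weight = rng.randint(1, 10, size=len(errors))
# both medians are 0 or 1, and they only differ if the weights happen to flip
# which class carries the majority of the total weight
print(np.median(errors))
print(np.quantile(errors, 0.5, method="inverted_cdf", weights=sample_weight))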
Any other comments?

This is a draft as I don't think this PR should be merged without resolving underlying issues.
cc @glemaitre @ogrisel