
Remove median_absolute_error from METRICS_WITHOUT_SAMPLE_WEIGHT #30787


Draft · lucyleeow wants to merge 5 commits into main

Conversation

lucyleeow (Member)

What does this implement/fix? Explain your changes.

sample_weight support was added to median_absolute_error in 0.24, but the metric was never removed from METRICS_WITHOUT_SAMPLE_WEIGHT. Removing it has highlighted several issues:

Redundancy in checking

I had to add check_consistent_length to median_absolute_error to get the same error message as other metrics when y_pred, y_true and sample_weight are not of the same length. It looks like this check was added to most sample-weight-aware metrics in #9903, but not to median_absolute_error.

However, it is worth noting that there is redundancy in our checking:

  • _check_reg_targets* - checks that y_pred and y_true are of consistent length, performs check_array, checks that multioutput is acceptable, and various other regression-related checks.
  • check_consistent_length - checks that y_pred, y_true and sample_weight are of the same length; used in most (all?) regression metrics.
  • _check_sample_weight - checks that sample_weight is the same length as y, performs check_array on sample_weight, and various other checks.

If all 3 checks are done in a metric, we are effectively checking that sample_weight is the correct length 3 times (see the sketch below).
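To make the overlap concrete, here is a minimal sketch (not the actual scikit-learn metric code, and the data is made up) of the three places where the length of sample_weight effectively ends up being verified:

```python
import numpy as np

from sklearn.utils.validation import _check_sample_weight, check_consistent_length

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
sample_weight = np.array([1.0, 2.0, 1.0, 0.5])

# 1) _check_reg_targets(y_true, y_pred, multioutput) internally calls
#    check_consistent_length(y_true, y_pred), plus check_array and multioutput checks.
# 2) The metric then typically calls check_consistent_length again, now with sample_weight:
check_consistent_length(y_true, y_pred, sample_weight)
# 3) _check_sample_weight validates sample_weight against y once more
#    (check_array on sample_weight plus a length check):
sample_weight = _check_sample_weight(sample_weight, y_true)
```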

Quantile problems

median_absolute_error fails check_sample_weight_invariance in test_regression_sample_weight_invariance; I will put a detailed description in the review comments.

Classification data used to test regression metrics

test_multilabel_sample_weight_invariance fails with:

ValueError: Unweighted and weighted scores are unexpectedly almost equal (0.0) and (0.0) for median_absolute_error

This makes sense because we are passing multilabel classification data (0/1 values) to MULTIOUTPUT_METRICS, which are regression metrics with "multioutput-continuous" format support (e.g., "mean_squared_error", "r2_score", etc.). I am not sure why we would not use regression data for these metrics. The tests do pass for all the other regression metrics, but since abs(y_pred - y_true) is either 0 or 1 for every sample, it is very likely that the weighted and unweighted median_absolute_error are the same value (see the illustration below).
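As a rough illustration (made-up 0/1 data, using NumPy's weighted quantile, which requires NumPy >= 2.0), the weighted and unweighted medians of the absolute errors almost always coincide on this kind of data:

```python
import numpy as np

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=50)          # multilabel-style 0/1 targets
y_pred = rng.randint(0, 2, size=50)
errors = np.abs(y_pred - y_true)             # every entry is 0 or 1
sample_weight = rng.randint(1, 10, size=50)

unweighted = np.median(errors)
weighted = np.quantile(errors, 0.5, method="inverted_cdf", weights=sample_weight)
# With only 0s and 1s, both medians typically land on the same value, unless the
# weights push the weighted proportion of 1s across the 50% threshold.
print(unweighted, weighted)
```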

I think we should amend test_multilabel_sample_weight_invariance so multi-output regression data is passed to the MULTIOUTPUT_METRICS tests (and maybe even change the name of this variable to make it clear that these are regression metrics).

Any other comments?

This is a draft as I don't think this PR should be merged without resolving the underlying issues.
cc @glemaitre @ogrisel

@lucyleeow lucyleeow marked this pull request as draft February 8, 2025 10:34

github-actions bot commented Feb 8, 2025

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: 92884bc. Link to the linter CI: here

```diff
@@ -1550,12 +1549,13 @@ def check_sample_weight_invariance(name, metric, y1, y2):
     ),
 )
 def test_regression_sample_weight_invariance(name):
-    n_samples = 50
-    random_state = check_random_state(0)
+    n_samples = 51
```
Member Author

This cannot be an even number due to #17370, as suggested by @ogrisel's last paragraph in #17370 (comment), even though I agree it's a temporary fix.

Member

Since #29907 was merged, it should be possible to update median_absolute_error to handle an even number of data points (with or without weights), as explained below.

```diff
-    n_samples = 50
-    random_state = check_random_state(0)
+    n_samples = 51
+    random_state = check_random_state(1)
```
Member Author

I don't quite understand why this test would be 'flaky' for median_absolute_error on the 'check that the weighted and unweighted scores are unequal' check in check_sample_weight_invariance. Both seeds 0 and 2 fail here 🤷

To check that it was not our _weighted_percentile doing something wrong, I used numpy's quantile with sample weights (which I trust, because I saw that PR and the scrutiny and all the tests):

```python
import numpy as np

from sklearn.metrics import median_absolute_error
from sklearn.utils.validation import check_random_state

n_samples = 51
random_state = check_random_state(0)
# regression
y_true = random_state.random_sample(size=(n_samples,))
y_pred = random_state.random_sample(size=(n_samples,))

rng = np.random.RandomState(0)
sample_weight = rng.randint(1, 10, size=len(y_true))

no_weights = np.median(np.abs(y_pred - y_true), axis=0)
# gives np.float64(0.26234528145390845)

weights = np.quantile(
    np.abs(y_pred - y_true), 0.5, axis=0, method='inverted_cdf',
    weights=sample_weight
)
# gives array(0.26234528)
```

I may be missing something, but I don't know why this would be likely to be the same with and without weights.

Note that changing the max value in sample_weight = rng.randint(1, 10, size=len(y_true)) from 10 to a smaller value (e.g., 5 or 8) was also very effective.

Member

On an even number of samples (taking weights into account), inverted_cdf yields an asymmetric result, while np.median (on repeated samples) yields a symmetric result, as it uses the "linear" interpolation definition of "in between data points" quantiles.

We need to use a symmetric version of _weighted_percentile, named _averaged_weighted_percentile, which was merged yesterday as part of #29907 and is equivalent to method="averaged_inverted_cdf" of np.quantile.
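A small unweighted illustration of the asymmetry (the method parameter of np.quantile requires NumPy >= 1.22; the expected outputs in the comments follow NumPy's documented quantile methods):

```python
import numpy as np

values = np.array([1.0, 2.0, 3.0, 4.0])   # even number of data points

print(np.median(values))                                          # 2.5 ("linear", symmetric)
print(np.quantile(values, 0.5, method="inverted_cdf"))            # 2.0 (asymmetric)
print(np.quantile(values, 0.5, method="averaged_inverted_cdf"))   # 2.5 (symmetric again)
```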

Member Author

Sorry, I was not clear: random state seeds 0 and 2 fail, but 1 passes.
The specific check that is failing is:

```python
# check that the weighted and unweighted scores are unequal
weighted_score = metric(y1, y2, sample_weight=sample_weight)

# use context manager to supply custom error message
with pytest.raises(AssertionError):
    assert_allclose(unweighted_score, weighted_score)
    raise ValueError(
        "Unweighted and weighted scores are unexpectedly "
        "almost equal (%s) and (%s) "
        "for %s" % (unweighted_score, weighted_score, name)
    )
```

Which is odd, right? See the sanity check above.

Also, I agree that it would be better to use numpy's implementation of weighted quantiles, but our minimum supported version is not high enough. Do you think we should vendor the code?

@lucyleeow (Member Author) Feb 12, 2025

Okay, I looked into this more, and it seems that this is just not a good check for median_absolute_error (the red line is the median value; the weighted 'median' is calculated with np.quantile(method="inverted_cdf")):

[image: 3×3 grid of histograms of abs(y_pred - y_true) — columns: seeds 0, 1, 2; rows: no weights, weights (1-9), weights (1-5) — with the (weighted) median marked by a red dashed line]

Code:

```python
import matplotlib.pyplot as plt
import numpy as np

from sklearn.utils.validation import check_random_state

fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(12, 10))

# Different weight ranges for each row
weight_ranges = [
    (None, "No Weights"),
    (np.arange(1, 10), "Weights (1-9)"),
    (np.arange(1, 5), "Weights (1-5)"),
]

# Different random seeds for each column
seeds = [0, 1, 2]

# Loop over rows (weights) and columns (seeds)
for row, (weight_range, weight_label) in enumerate(weight_ranges):
    for col, seed in enumerate(seeds):
        random_state = check_random_state(seed)
        y_true = random_state.random_sample(size=(51,))
        y_pred = random_state.random_sample(size=(51,))
        values = np.abs(y_pred - y_true)

        rng = np.random.RandomState(seed)
        sample_weight = (
            rng.randint(weight_range.min(), weight_range.max() + 1, size=len(y_true))
            if weight_range is not None
            else None
        )

        # Plot histogram
        if sample_weight is None:
            axes[row, col].hist(values, bins=15, edgecolor="black", alpha=0.7, density=True)
            median_value = np.median(values)
        else:
            axes[row, col].hist(
                values, bins=15, weights=sample_weight, edgecolor="black", alpha=0.7, density=True
            )
            median_value = np.quantile(values, 0.5, method="inverted_cdf", weights=sample_weight)

        # Add median line
        axes[row, col].axvline(median_value, color="red", linestyle="dashed", linewidth=1, label="Median")

        # Set title
        axes[row, col].set_title(f"Seed {seed} - {weight_label}")

# Adjust layout for readability
plt.tight_layout()
plt.show()
```

Weights don't change the distribution of abs(y_pred - y_true) enough to change the median value.

We can change the test to make this check pass (e.g., we could try assigning only high or only low weights to a subset of the values), but I am not 100% sure about changing this check for all regression metrics. To be fair, this test is just ensuring that the weighted and un-weighted scores are different, so it should probably be okay?

cc @ogrisel

Member

> Also, I agree that it would be better to use numpy's implementation of weighted quantiles, but our minimum supported version is not high enough. Do you think we should vendor the code?

If the complexity of vendoring is not too high, we should definitely benefit from it.

@lucyleeow (Member Author) Feb 12, 2025

Considering we added _averaged_weighted_percentile recently in #29907, and also the open PR #29034, I agree that vendoring is a good option... (as we are having to spend a lot of effort maintaining this function anyway).

Member

OK, I see. I would define the weights as np.arange(len(y_true)) and I think that would be good enough for this test. I indeed agree with your analysis @lucyleeow.
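A quick sanity check of that suggestion (just an illustration, using the released median_absolute_error, which already accepts sample_weight): with weights that grow with the sample index, the weighted and unweighted scores are expected to differ, which is what check_sample_weight_invariance asserts.

```python
import numpy as np

from sklearn.metrics import median_absolute_error
from sklearn.utils.validation import check_random_state

random_state = check_random_state(0)
y_true = random_state.random_sample(size=(51,))
y_pred = random_state.random_sample(size=(51,))
sample_weight = np.arange(len(y_true))

print(median_absolute_error(y_true, y_pred))
print(median_absolute_error(y_true, y_pred, sample_weight=sample_weight))
```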

Member

I'll check the subsequent changes in the test to see if np.arange could be an issue.

Member

I would like to use _averaged_weighted_percentile directly, to later be able to add an array API compatible version.

At the time of writing, numpy.quantile(..., method="averaged_inverted_cdf") does not support weights (yet), and it is not part of the array API spec. So we need a version that supports weights, the symmetric behavior with an even number of data points, and possibly the array API.
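For concreteness, a rough sketch of what that could look like; the helper's import location and exact signature are assumptions here (based on the private helper added in #29907), not the final implementation:

```python
import numpy as np

# Assumed import path and signature for the helper added in #29907.
from sklearn.utils.stats import _averaged_weighted_percentile


def median_absolute_error_sketch(y_true, y_pred, sample_weight=None):
    errors = np.abs(np.asarray(y_pred) - np.asarray(y_true))
    if sample_weight is None:
        # symmetric by construction ("linear" interpolation between the two middle points)
        return np.median(errors)
    # symmetric weighted median, equivalent to method="averaged_inverted_cdf"
    return _averaged_weighted_percentile(errors, np.asarray(sample_weight), 50)
```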


ogrisel commented Feb 8, 2025

> If all 3 checks are done in a metric, we are effectively checking that sample_weight is the correct length 3 times.

This is a waste, but shape consistency checks are not very costly (compared to value-dependent checks, e.g. checking that there are no negative weights or no nan/inf weights), so maybe we can live with it. Unless removing those redundant checks helps simplify the code, in which case we should do it.


ogrisel commented Feb 8, 2025

> I think we should amend test_multilabel_sample_weight_invariance so multi-output regression data is passed to the MULTIOUTPUT_METRICS tests (and maybe even change the name of this variable to make it clear that these are regression metrics)

test_multilabel_sample_weight_invariance implies classification to me. We should probably have two different functions, one for multilabel classification and another for multioutput regression. And maybe another one for multioutput multiclass classification (if needed; I am not sure we have any metric with an API that can handle this type of prediction).

lucyleeow (Member Author)

Thanks @ogrisel.

With regard to the duplicate checks (#30787 (comment)), thinking about it more, I think the problems are rather:

  • Lack of consistency between metrics. Only 2 regression metrics use _check_sample_weight, but AFAICT other metrics that accept sample_weight would also benefit from this check.
  • It's hard to tell what is being checked where, and which check will ultimately raise the error that the user sees:
    • It looks like the new array-API-supporting (regression) metrics use _check_reg_targets_with_floating_dtype, which does take sample_weight BUT only matches sample_weight's dtype; it does no length consistency check or check_array, because _check_reg_targets does not accept sample_weight.
    • check_consistent_length is generally called twice. The first time it is called within _check_reg_targets and raises an error when y_pred and y_true are inconsistent. The second time, it is passed y_pred, y_true and sample_weight but will only raise an error when sample_weight is incorrect, as it was already called inside _check_reg_targets to check y_pred and y_true.

What do you think of _check_reg_targets_with_floating_dtype also doing the sample_weight checks, if sample_weight is passed? Or at least we could reduce it to 2 checks: _check_reg_targets* and _check_sample_weight?
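A rough sketch (a hypothetical helper, not actual scikit-learn code) of the kind of consolidation I mean, where the sample_weight validation happens alongside the target checks:

```python
import numpy as np

from sklearn.utils.validation import _check_sample_weight, check_consistent_length


def _check_reg_targets_and_weight_sketch(y_true, y_pred, sample_weight=None):
    """Validate targets and (optionally) sample_weight in one place."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    check_consistent_length(y_true, y_pred)
    if sample_weight is not None:
        # _check_sample_weight already verifies the length against y_true,
        # so no extra check_consistent_length call is needed for it.
        sample_weight = _check_sample_weight(sample_weight, y_true, dtype=y_true.dtype)
    return y_true, y_pred, sample_weight
```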

WDYT?


ogrisel commented Feb 11, 2025

It sounds like a good plan.

@glemaitre glemaitre self-requested a review February 12, 2025 09:55
```diff
     random_state = check_random_state(0)
     # regression
     y_true = random_state.random_sample(size=(n_samples,))
     y_pred = random_state.random_sample(size=(n_samples,))
+    sample_weight = np.arange(len(y_true))
```
Member Author

Changed as suggested in #30787 (comment). This works great.

I've amended check_sample_weight_invariance to optionally take sample_weight, as I didn't want to change the sample_weight for all tests that use check_sample_weight_invariance. WDYT @glemaitre?
