Discrepancy between output of classifier feature_importances_ with different sklearn installations
Describe the bug
I am currently using the scikit-learn classifier feature_importances_ attribute on a project to rank important features from my model, and my CI pipeline runs the project test suite using instances of scikit-learn==1.3.2 and scikit-learn==1.5.2 on a remote Linux host. I am experiencing some discrepancies in the output of the relevant test (for which I have provided a minimal viable reproducer below) on different machines/installations/sklearn versions.
There are a few specific problems I am experiencing:
Locally, the test will pass using a binary installation of scikit-learn==1.3.2 and fail using scikit-learn==1.5.2. With the help of my team, we have traced this back and found the earliest failing version to be 1.4.1.post1. We suspect that the error originates from a change made in #27639 (FIX force node values outside of [0, 1] range for monotonically constraints classification trees), which switched tree_.values from storing absolute counts to storing proportions, but we have not determined a root cause for the discrepancy (a small inspection sketch of that change follows this list).
As mentioned in (1), when running the test suite locally on my Mac ARM64 machine, the test fails as described; however, when running it on a remote Linux machine, the test passes with both sklearn versions.
The test will also fail when I build scikit-learn==1.3.2 from source rather than installing the binary distribution.
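For context, here is a minimal way to see the change in what tree_.value stores, based on my reading of the 1.4 changelog (a sketch separate from the reproducer below; the printed numbers are what I would expect, not verified output):

from sklearn.tree import DecisionTreeClassifier
import numpy as np

# a tiny two-class dataset so the root-node values are easy to read
X_small = np.array([[0], [0], [1], [1]])
y_small = np.array([0, 0, 1, 1])
tree = DecisionTreeClassifier(random_state=0).fit(X_small, y_small)

# under scikit-learn 1.3.x this should print weighted class counts, e.g. [[2. 2.]];
# under >= 1.4 it should print per-class fractions, e.g. [[0.5 0.5]]
print(tree.tree_.value[0])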
My main question is: what could cause these observed discrepancies across sklearn versions, installation types, and environments, and which output is most "correct"?
Steps/Code to Reproduce
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
from pandas.testing import assert_frame_equal

# this test serves as a minimal viable reproducer for the
# difference observed in output of tree values between
# sklearn versions 1.3.2 and 1.4.2. this test should pass
# when using sklearn==1.3.2 and fail when using sklearn==1.4.2

# first create a minimal dataset of values for training
# a Random Forest Classifier
random_state = 123
rng = np.random.default_rng(random_state)
X = rng.integers(0, 2, size=(1000, 12))
y = np.asarray([1, 0] * 500)
y[-1] = 1
X[:, 0] = y
X[:, 1] = 1 - y

clfr = RandomForestClassifier(n_estimators=100, random_state=random_state)
clfr.fit(X, y)

# find the importances of the estimator and check the ranking of the importances
importances_out = clfr.feature_importances_

# these are the importance values that are expected from ``sklearn==1.3.2``
importances_exp = np.array(
[
0.52090464,
0.46263368,
0.00115268,
0.00179985,
0.00177495,
0.00169134,
0.00157653,
0.00135364,
0.00175814,
0.00169148,
0.00162767,
0.00203539,
]
)
importances_out = pd.DataFrame(importances_out).sort_values(by=0)
importances_exp = pd.DataFrame(importances_exp).sort_values(by=0)
assert_frame_equal(importances_out, importances_exp)
Expected Results
The expected values above, i.e. importances_exp, represent the ranked feature importances of a "cooked" dataset where the input to the model is an array of random values, except for two columns, which are perfectly (inversely) correlated with the target values y. As expected, the two correlated features show the highest importance and the random features show the lowest. The test checks that the ranking of the input features is correct by comparing the DataFrames storing the sorted output values from clfr.feature_importances_.
Actual Results
The expected output above, which comes from feature_importances_ when using scikit-learn==1.3.2, differs by small floating-point amounts from the output when using scikit-learn>=1.4.2, and the ranking of the values is changed by the discrepancy between the floating-point values of the lower-ranked features.
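A quick way to see where the two rankings diverge (a diagnostic sketch of my own, assuming the reproducer above has already been run in the failing environment):

# the sorted DataFrames keep the original feature indices in their index,
# so comparing the two index orders shows which features swap places
print("observed ranking:", importances_out.index.to_list())
print("expected ranking:", importances_exp.index.to_list())

# the per-feature differences are tiny even where the ranking flips
diff = np.abs(
    importances_out.sort_index()[0].to_numpy()
    - importances_exp.sort_index()[0].to_numpy()
)
print("max per-feature |difference|:", diff.max())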
I don't know if sklearn can actually guarantee stability here: if I change the random_state of the estimator only (i.e., use the same input data, just change the estimator seed), the number of index mismatches for the importances is often quite high (almost all of the random features), whether I use Mac or Linux with the provided reproducer.
That somewhat matches the intuition that if the randomly-permuted feature data is all roughly equally unimportant for deciding the response, the tree algorithm doesn't really have a sane way to distinguish their relative importances consistently.
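To illustrate (a rough sketch, assuming the same X and y as in the reproducer above; only the estimator seed varies):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# reference ranking from the original seed
base = RandomForestClassifier(n_estimators=100, random_state=123).fit(X, y)
base_rank = np.argsort(base.feature_importances_)

for seed in (7, 42, 2024):
    clf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)
    rank = np.argsort(clf.feature_importances_)
    # count ranking positions that differ from the reference; the informative
    # features are expected to stay on top while the noise features shuffle
    n_mismatch = int((rank != base_rank).sum())
    print(f"seed={seed}: {n_mismatch} of {len(rank)} ranking positions differ")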
It may be nice to have a way to bisect this as we discussed; that is currently being hindered by the source builds differing from the binaries.
My immediate intuition is that this could mean that the in-house test may be trying to enforce something that isn't formally guaranteed to be stable in practice (the feature rankings of very noisy/random features).
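A looser version of the in-house assertion that only checks what the dataset actually guarantees (a sketch; the thresholds are illustrative and assume the clfr fitted in the reproducer above):

import numpy as np

importances = clfr.feature_importances_

# the two constructed features should rank highest, regardless of version
top_two = set(np.argsort(importances)[-2:])
assert top_two == {0, 1}, "the two constructed features should rank highest"

# the noise features should all be negligible (threshold chosen loosely)
assert importances[2:].max() < 0.01

This avoids asserting an exact ordering among near-tied noise features, which is the part that is not stable across versions or platforms.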
We certainly don't support comparing such values across versions.
And if you look at it, the disparity can be attributed to floating-point differences; the actual differences in the feature importance values are minute.
As for which one is "more correct", we hope the latest release, of course. That's why things change: people report issues, or we find them, and we fix them. That very often results in different outputs when you compare versions.